September 05, 2025

Antonio Terceiro

Past halfway there: history of autopkgtest support in Debian

The release of Debian 13 ("Trixie") last month marked another milestone in the effort to provide automated test support for Debian packages in their installed form. We have reached the mark of 57% of the source packages in the archive declaring support for autopkgtest.

| Release | Packages with tests | Total number of packages | % of packages with tests |
|----------|---------------------|--------------------------|--------------------------|
| wheezy | 5 | 17175 | 0% |
| jessie | 1112 | 20596 | 5% |
| stretch | 5110 | 24845 | 20% |
| buster | 9966 | 28501 | 34% |
| bullseye | 13949 | 30943 | 45% |
| bookworm | 17868 | 34341 | 52% |
| trixie | 21527 | 37670 | 57% |

The code that generated this table is provided at the bottom.

The growth rate has been consistently decreasing at each release after stretch. That probably means that the low hanging fruit -- adding support en masse for large numbers of similar packages, such as team-maintained packages for a given programming language -- has been picked, and from now on the work gets slightly harder. Perhaps there is a significant long tail of packages that will never get autopkgtest support.

Looking for common prefixes among the packages missing a Testsuite: field gives us the largest groups of packages missing autopkgtest support:

$ grep-dctrl -v -F Testsuite --regex -s Package -n . trixie | cut -d - -f 1 | uniq -c | sort -n | tail -20
     50 apertium
     50 kodi
     51 lomiri
     53 maven
     55 libjs
     57 globus
     66 cl
     67 pd
     72 lua
     79 php
     88 puppet
     91 r
    111 gnome
    124 ruby
    140 ocaml
    152 rust
    178 golang
    341 fonts
    557 python
   1072 haskell

There seems to be a fair amount of Haskell and Python. If someone could figure out a way of testing installed fonts in a meaningful way, this would be a good niche where we can cover 300+ packages.

There is another analysis that could be made, which I didn't: what percentage of the new packages introduced in a given release declare autopkgtest support, compared with the total number of new packages in that release? My data only counts the totals, so we start with the technical debt of almost all of the 17,000+ packages with no tests in wheezy, which was the stable release at the time I started Debian CI. How many of those got tests since then?
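
For the record, here is a rough, untested sketch of how that analysis could be done with grep-dctrl, reusing the uncompressed Sources files that the script at the bottom of this post downloads (one file per release; the release pair below is just an example):

#!/bin/sh
# Sketch: count how many source packages that are new in $new (relative to
# $old) declare a Testsuite field. Assumes the Sources files were already
# extracted by the script below.
set -eu
old=bookworm
new=trixie

grep-dctrl -n -s Package -F Package --regex . "$old" | sort -u > "$old.pkgs"
grep-dctrl -n -s Package -F Package --regex . "$new" | sort -u > "$new.pkgs"
grep-dctrl -n -s Package -F Testsuite --regex . "$new" | sort -u > "$new.tested"

# Packages only present in the newer release
comm -13 "$old.pkgs" "$new.pkgs" > "$new.newpkgs"
new_total="$(wc -l < "$new.newpkgs")"
new_tested="$(comm -12 "$new.newpkgs" "$new.tested" | wc -l)"
echo "${new}: ${new_tested} out of ${new_total} new source packages declare a Testsuite field"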

Note that not supporting autopkgtest does not mean that a package is not tested at all: it can still run build-time tests, which are also useful. Not supporting autopkgtest, though, means that its binaries in the archive can't be automatically tested in their installed form. Then again, there is an entire horde of volunteers running testing and unstable on a daily basis who test Debian and report bugs.

This is the script that produced the table in the beginning of this post:

#!/bin/sh

set -eu

extract() {
  local release
  local url
  release="$1"
  url="$2"

  if [ ! -f "${release}" ]; then
    rm -f "${release}.gz"
    curl --silent -o "${release}.gz" "${url}"
    gunzip "${release}.gz"
  fi

  local with_tests
  local total
  with_tests="$(grep-dctrl -c -F Testsuite --regex . $release)"
  total="$(grep-dctrl -c -F Package --regex . $release)"

  echo "| ${release} | ${with_tests} | ${total} | $((100*with_tests/total))% |"
}

echo "| **Release** | **Packages with tests** | **Total number of packages** | **% of packages with tests** |"
echo "|-------------|-------------------------|------------------------------|------------------------------|"
for release in wheezy jessie stretch buster; do
  extract "${release}" "http://archive.debian.org/debian/dists/${release}/main/source/Sources.gz"
done
for release in bullseye bookworm trixie; do
  extract "${release}" "http://ftp.br.debian.org/debian/dists/${release}/main/source/Sources.gz"
done

05 September, 2025 08:22PM

September 04, 2025

Noah Meyerhans

False Positives

There are times when an email based workflow gets really difficult. One of those times is when discussing projects related to spam and malware detection.

 noahm@debian.org
host stravinsky.debian.org [2001:41b8:202:deb::311:108]
SMTP error from remote mail server after end of data:
550-malware detected: Sanesecurity.Phishing.Fake.30934.1.UNOFFICIAL:
550 message rejected
submit@bugs.debian.org
host stravinsky.debian.org [2001:41b8:202:deb::311:108]
SMTP error from remote mail server after end of data:
550-malware detected: Sanesecurity.Phishing.Fake.30934.1.UNOFFICIAL:
550 message rejected

This was, in fact, a false positive. And now, because reportbug doesn’t record outgoing messages locally, I need to retype the whole thing.

(NB. this is not a complaint about the policies deployed on the Debian mail servers; they’d be negligent if they didn’t implement such policies on today’s internet.)

04 September, 2025 02:53PM by Noah Meyerhans (frodo+blog@morgul.net)

September 03, 2025

Joachim Breitner

F91 in Lean

Back in March, with version 4.17.0, Lean introduced partial_fixpoint, a new way to define recursive functions. I had drafted a blog post for the official Lean FRO blog back then, but forgot about it, and with the Lean FRO blog discontinued, I’ll just publish it here, better late than never.

With the partial_fixpoint mechanism we can model possibly partial functions (so those returning an Option) without an explicit termination proof, and still prove facts about them. See the corresponding section in the reference manual for more details.

On the Lean Zulip, I was asked if we can use this feature to define the McCarthy 91 function and prove it to be total. This function is a well-known tricky case for termination proofs.

First let us have a brief look at why this function is tricky to define in a system like Lean. A naive definition like

def f91 (n : Nat) : Nat :=
  if n > 100
  then n - 10
  else f91 (f91 (n + 11))

does not work; Lean is not able to prove termination of this function by itself.

Even using well-founded recursion with an explicit measure (e.g. termination_by 101 - n) is doomed, because we would have to prove facts about the function’s behaviour (namely that f91 n = f91 101 = 91 for 90 ≤ n ≤ 100) and at the same time use that fact in the termination proof that we have to provide while defining the function. (The Wikipedia page spells out the proof.)

We can make well-founded recursion work if we change the signature and use a subtype on the result to prove the necessary properties while we are defining the function. Lean by Example shows how to do it, but for larger examples this approach can be hard or tedious.

With partial_fixpoint, we can define the function as a partial function without worrying about termination. This requires a change to the function’s signature, returning an Option Nat:

def f91 (n : Nat) : Option Nat :=
  if n > 100
    then pure (n - 10)
    else f91 (n + 11) >>= f91
partial_fixpoint

From the point of view of the logic, Option.none is then used for those inputs for which the function does not terminate.

This function definition is accepted and the function runs fine as compiled code:

#eval f91 42

prints some 91.

The crucial question is now: can we prove anything about f91? In particular, can we prove that this function is actually total?

Since we now have the f91 function defined, we can start proving auxiliary theorems, using whatever induction schemes we need. In particular we can prove that f91 is total and always returns 91 for n ≤ 100:

theorem f91_spec_high (n : Nat) (h : 100 < n) : f91 n = some (n - 10) := by
  unfold f91; simp [*]

theorem f91_spec_low (n : Nat) (h₂ : n ≤ 100) : f91 n = some 91 := by
  unfold f91
  rw [if_neg (by omega)]
  by_cases n < 90
  · rw [f91_spec_low (n + 11) (by omega)]
    simp only [Option.bind_eq_bind, Option.some_bind]
    rw [f91_spec_low 91 (by omega)]
  · rw [f91_spec_high (n + 11) (by omega)]
    simp only [Nat.reduceSubDiff, Option.some_bind]
    by_cases h : n = 100
    · simp [f91, *]
    · exact f91_spec_low (n + 1) (by omega)

theorem f91_spec (n : Nat) : f91 n = some (if n ≤ 100 then 91 else n - 10) := by
  by_cases h100 : n ≤ 100
  · simp [f91_spec_low, *]
  · simp [f91_spec_high, Nat.lt_of_not_le ‹_›, *]

-- Generic totality theorem
theorem f91_total (n : Nat) : (f91 n).isSome := by simp [f91_spec]

(Note that theorem f91_spec_low is itself recursive in a somewhat non-trivial way, but Lean can figure that out all by itself. Use termination_by? if you are curious.)

This is already a solid start! But what if we want a function of type f91! (n : Nat) : Nat, without the Option? We can then derive that from the partial variant, as we have just proved it to be actually total:

def f91! (n : Nat) : Nat  := (f91 n).get (f91_total n)

theorem f91!_spec (n : Nat) : f91! n = if n ≤ 100 then 91 else n - 10 := by
  simp [f91!, f91_spec]

Using partial_fixpoint one can decouple the definition of a function from a termination proof, or even model functions that are not terminating on all inputs. This can be very useful in particular when using Lean for program verification, such as with the aeneas package, where such partial definitions are used to model Rust programs.

03 September, 2025 08:18PM by Joachim Breitner (mail@joachim-breitner.de)

Enrico Zini

CAdES signatures on Debian

CAdES is a digital signature standard that is used, and sometimes mandated, by the Italian Public Administration.

To be able to do my job, I own a Carta Nazionale dei Servizi (CNS) with which I can generate legally binding signatures. Now comes the problem of finding software to do it.

Infocamere Firma4NG

InfoCamere are distributing a software called Firma4NG, with a Linux option, which, I'm pleased to say, seems to work just fine.

Autofirma

AutoFirma is a Java software for digital signatures distributed by the Spanish government, which has a Linux version.

It is licensed as GPL-2+ | EUPL-1.1, and the source seems to be here.

While my Spanish is decent I lack jargon for this specific field, and I didn't manage to make it work with my CNS.

Autogram

Andrej Shadura pointed me to Autogram, a Slovakian software for digital signatures, licensed under the EUPL-1.2.

The interface is still only in Slovakian, so I tried it but didn't go very far in making it work.

OpenSSL

In trixie, openssl is almost, but not quite, able to do it. Here's how far I've got.

Install opensc

apt install opensc

Test if you can access the smart card with:

pkcs11-tool --list-objects [-l]

You can find other pkcs11-tool examples here

Set up a pkcs11 provider for openssl

apt install pkcs11-provider

Edit /etc/ssl/openssl.cnf:

  • In [provider_sect] add pkcs11 = pkcs11_sect
  • In [default_sect], uncomment activate = 1
  • Add this new section:
[pkcs11_sect]
module = /usr/lib/x86_64-linux-gnu/ossl-modules/pkcs11.so
pkcs11-module-path = /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so
default_algorithms = ALL
activate = 1

Test with openssl list -providers

You can check if openssl can see keys on the card:

openssl pkey -in 'pkcs11:id=%01' -pubin -pubout -text

See PKCS11 URI documentation here.

Install the PKCS11 engine for openssl

apt install libengine-pkcs11-openssl

It looks like providers have replaced engines, and this should not be needed, but I couldn't find a way to convince openssl to work without it.

Sign a document

openssl cms -nodetach -binary -cades -outform DER -in filename -out filename.p7m -sign -signer 'pkcs11:id=%01' -keyform engine -engine pkcs11
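
For a quick local sanity check of the resulting envelope, something along these lines should work (an untested sketch; ca-chain.pem is a placeholder for the issuer's CA certificates, which you would need to obtain separately):

openssl cms -verify -cades -inform DER -in filename.p7m -CAfile ca-chain.pem -out filename.verified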

It verifies correctly using the Austrian verification system.

All the Italian verification systems I tried, however, complain that, although the signature is valid, the certificate is emitted by an unqualified CA and the certificate revocation information cannot be found.

PAdES

When signing PDF files, the PAdES standard is sometimes accepted.

LibreOffice is able to generate PAdES signatures using the "File / Digital signatures…" menu, and provided the smart card is in the reader it is able to use it. Both LibreOffice and Okular can verify that the signature is indeed there.

However, when trying to validate the signature using Italian validators, I get the same complaints about unqualified CAs and missing revocation information.

Wall of shame

Dike GoSign

Infocert (now Tinexta) used to distribute a software called "Dike GoSign" that worked on Ubuntu, which I used on a completely isolated VM, and it was awful but it worked.

I had to regenerate the VM for it, and discovered that the version they distribute now will refuse to work unless one signs in online with a Tinexta account. From the same company that asks you to install their own root certificates to use their digital signature system.

Gross.

Dropped.

Aruba Sign

Aruba used to distribute a software called Aruba Sign, which also worked on Ubuntu.

Ubuntu support has been discontinued, and they now only offer support for Windows or Mac.

Yuck. Dropped.

03 September, 2025 03:38PM

Colin Watson

Free software activity in August 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

Python team

forky is open! As a result I’m starting to think about the upcoming Python 3.14. At some point we’ll doubtless do a full test rebuild, but in advance of that I concluded that one of the most useful things I could do would be to work on our very long list of packages with new upstream versions. Of course there’s no real chance of this ever becoming empty since upstream maintainers aren’t going to stop work for that long, but there are a lot of packages there where we’re quite a long way out of date, and many of those include fixes that we’ll need for 3.14, either directly or by fixing interactions with new versions of other packages that in turn will need to be fixed. We can backport changes when we need to, but more often than not the most efficient way to do things is just to keep up to date.

So, I upgraded these packages to new upstream versions (deep breath):

  • aioftp
  • aiosignal (building on work by IanLucca)
  • audioop-lts
  • celery
  • djangorestframework
  • djoser
  • fpylll
  • frozenlist
  • git-repo-updater
  • ipykernel
  • klepto
  • kombu
  • multipart
  • netmiko (sponsoring work by Eduardo Silva; contributed supporting fix upstream)
  • pathos
  • ppft
  • pydantic
  • pydantic-core
  • pydantic-settings
  • pylsqpack
  • pymssql
  • pytest-mock
  • pytest-pretty
  • pytest-repeat
  • pytest-rerunfailures
  • python-a2wsgi
  • python-apptools (sponsoring work by Kathlyn Lara Murussi)
  • python-asgiref
  • python-asyncssh
  • python-bitarray
  • python-bitstring
  • python-bytecode
  • python-channels-redis
  • python-charset-normalizer
  • python-daphne
  • python-django-analytical
  • python-django-guid
  • python-django-health-check
  • python-django-pgbulk
  • python-django-pgtrigger
  • python-django-postgres-extra
  • python-django-storages
  • python-holidays
  • python-httpx-sse
  • python-icalendar
  • python-lazy-model
  • python-line-profiler
  • python-lz4
  • python-marshmallow-dataclass
  • python-mastodon
  • python-model-bakery
  • python-oauthlib
  • python-parse-type
  • python-pathvalidate
  • python-pgspecial
  • python-processview
  • python-pytest-subtests
  • python-roman
  • python-semantic-release
  • python-testfixtures
  • python-time-machine
  • python-tokenize-rt
  • python-typeguard
  • python-typing-extensions
  • python-urllib3
  • pyupgrade
  • requests (fixing CVE-2024-47081)
  • responses
  • zope.deferredimport
  • zope.schema
  • zope.testrunner

That’s only about 10% of the backlog, but of course others are working on this too. If we can keep this up for a while then it should help.

I packaged pytest-run-parallel, pytest-unmagic (still in NEW), and python-forbiddenfruit (still in NEW), all needed as new dependencies of various other packages.

setuptools upstream will be removing the setup.py install command on 31 October. While this may not trickle down immediately into Debian, it does mean that in the near future nearly all Python packages will have to use pybuild-plugin-pyproject (note that this does not mean that they necessarily have to use pyproject.toml; this is just a question of how the packaging runs the build system). We talked about this a bit at DebConf, and I said that I’d noticed a number of packages where this isn’t straightforward and promised to write up some notes. I wrote the Python/PybuildPluginPyproject wiki page for this; I expect to add more bits and pieces to it as I find them.

On that note, I converted several packages to pybuild-plugin-pyproject:

  • billiard
  • lazr.config
  • python-timeline
  • zope.sqlalchemy
  • zope.testing

I fixed several build/test failures:

I fixed some other bugs:

I reviewed Debian defaults: nftables as banaction and systemd as backend, but it looked as though nothing actually needed to be changed so we closed this with no action.

Rust team

Upgrading Pydantic was complicated, and required a rust-pyo3 transition (which Jelmer Vernooij started and Peter Michael Green has mostly been driving, thankfully), packaging rust-malloc-size-of (including an upstream portability fix), and upgrading several packages to new upstream versions:

  • rust-serde
  • rust-serde-derive
  • rust-serde-json
  • rust-smallvec
  • rust-speedate
  • rust-time
  • rust-time-core
  • rust-time-macros

bugs.debian.org

I fixed bugs.debian.org: misspelled checkbox id “uselessmesages”, as well as a bug that caused incoming emails with certain header contents to go missing.

OpenSSH

I fixed openssh-server: refuses further connections after having handled PerSourceMaxStartups connections with a cherry-pick from upstream.

Other bits and pieces

I upgraded libfido2 to a new upstream version.

I fixed mimalloc: FTBFS on armhf: cc1: error: ‘-mfloat-abi=hard’: selected architecture lacks an FPU, which was blocking changes to pendulum in the Python team. I also spent some time helping to investigate libmimalloc3: Illegal instruction Running mtxrun —generate, though that bug is still open.

I fixed various autopkgtest bugs in gssproxy, prompted by #1007 in Debusine.

Since my old team is decommissioning Bazaar/Breezy code hosting in Launchpad (the end of an era, which I have distinctly mixed feelings about), I converted Storm to git.

03 September, 2025 10:56AM by Colin Watson

Paul Wise

FLOSS Activities August 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

  • Obsolete conffile in zmap

All work was done on a volunteer basis.

03 September, 2025 03:59AM

Ben Hutchings

FOSS activity in August 2025

03 September, 2025 01:14AM by Ben Hutchings

Valhalla's Things

English Paper Piecing, Done Wrong

Posted on September 3, 2025
Tags: madeof:bits

A square mat made of orange, green and grey knit fabric hexagons sewn together.

For quite some time, I have been thinking about trying a bit of patchwork, and English Paper Piecing looked like a technique suited to my tastes, with the handsewing involved and the fact of having a paper pattern of sorts and everything.

The problem is, most of the scraps of fabric I get from my sewing aren’t really suitable for quilting, with a lot of them being either too black and too thick or too white and too thin.

The other side of the same mat, made of orange and green squares.

On the other hand, my partner wears polo shirts at work, and while I try to mend the holes that form, after a while the edges get worn, and they just are no longer suitable for the office, even with some creative mending, and they get downgraded to home wear. But then more office shirts need to be bought, and the home ones accumulate, and there is only so much room for polo shirts in the house, and the worst ones end up in my creative reuse pile.

Some parts are worn out and they will end up as cabbage stuffing for things, but some are still in decent enough conditions and could be used as fabric.

But surely, for English Paper Piecing you’d need woven fabric, not knit, even if it’s the dense piqué used in polo shirts, right? Especially if it’s your first attempt at the technique, right?

The hexagon side of the mat, with my hexagonal pattern weights decorated with Standard Compliant stickers: they fit exactly on the mat pattern.

Well, probably it wouldn’t work with complex shapes, but what about some 5-ish cm tall Standard Compliant bestagon? So I printed out some hexagons on thick paper, printed some bigger hexagons with sewing allowance as a cutting aid, found two shirts in the least me colours I could find (and one in grey because it was the best match for the other two) and decided to sacrifice them for the experiment.

And as long as the paper was still in the pieces, the work went nicely, so I persevered while trying to postpone the Moment of Truth.

The squares side of the mat, with a few random Piecepack pieces: the tiles take almost exactly 2 × 2 squares, and the coins fit inside each square with room to pick them up.

After a while I measured things out and saw that I could squeeze a 6.5 × 7 hexagon pattern into something resembling a square that was a multiple of the 2.5 cm square on the back of my Piecepack tiles, and decided to go for another Standard for the back (because of course I wasn’t going to buy new fabric for lining the work).

I kept the paper in the pieces until both sides were ready, and used it to sew them right sides together, leaving the usual opening in the middle of one side.

Then I pressed, removed the paper, turned everything inside out, pressed again and… it worked!

The hexagon side of the mat, with a set of polyhedral dice.

The hexagons look like hexagons, the squares look like squares, the whole thing feels soft and drapey, but structurally sound. And it’s a bit lumpy, but not enough to cause issues when using it as a soft surface to put over a noisy wooden table to throw dice on.

I considered adding some lightweight batting in the middle, but there was really no need for it, and wondered about how to quilt the piece in a way that worked with the patterns on the two sides, but for something this small it wasn’t really required.

However, I decided to add a buttonhole stitch border on all edges, to close the opening I had left and to reinforce especially the small triangles on the hexagons side, as those had a smaller sewing allowance and could use it.

The squares sides of the mat, with some blue and purple stones  in the starting position for a hnefatafl game.

And of course, the 11 × 11 squares side wasn’t completely an accident, but part of A Plan.

For this project there isn’t really a pattern, but I did publish the files I used to print the paper pieces even if they were pretty trivial.

And there are more polo shirts in that pile, and while they won’t be suitable for anything complex, maybe I could try some rhombs, or even kites and darts?

03 September, 2025 12:00AM

September 02, 2025

Debian Outreach Team

Spaarsh GSoC Report

GSoC 2025 Report: Enhancing Debian packages with ROCm GPU acceleration

GitLab Salsa: @Spaarsh

GitHub: Spaarsh

Introduction

I am Spaarsh Thakkar, a final-year Computer Science Engineering undergrad from India. My interests lie in research and systems. My recent work has been in and around Graphics Processing Units and I also hold a keen interest in Computer Networks. At the time of writing, I have been an open-source contributor for almost a year.

Proposal Description (as shown on GSoC Project Profile1)

Due to Debian’s open-source nature, no Debian package in main can have a proprietary GPU package listed as a dependency. While AI and HPC workloads increasingly rely on GPU acceleration, many Debian packages still focus solely on CUDA, which is proprietary.

With the advent of ROCm, an open-source GPU computing platform, we can now integrate full-fledged AMD GPU support into Debian packages. This will improve the experience of developers working in AI/ML and HPC while positioning Debian as a strong OS choice for GPU-driven workloads. The proposal aims to aid in solving the aforementioned problem by packaging several ROCm packages for Debian and adding ROCm support to some existing Debian packages.

The deliverables are as follows:

  1. New Debian packages with GPU support
  2. Enhanced GPU support within existing Debian packages
  3. More autopkgtests running on the Debian ROCm CI

Key Objectives

Enable ROCm in:

  1. dbcsr
  2. gloo
  3. cp2k

Publish the following packages to the Debian apt archive:

  1. hipblas-common
  2. hipBLASlt

Work Report

1. Publishing hipblas-common to apt

This objective was successfully completed, resulting in hipblas-common being published in the apt repository2.

The process involved the following steps:

  1. Filing an Intent-To-Package (ITP) bug3
  2. Pulling the upstream source code repository from GitHub
  3. Adding the debian/ packaging files
  4. Testing the package locally
  5. Creating the corresponding project under rocm-team4
  6. Applying the necessary changes
  7. Building the package
  8. Testing it using sbuild
  9. Signing the package files
  10. Uploading the package to the mentors.debian.net archive (now in the official archive)5
  11. Addressing review feedback and making changes
  12. Requesting sponsorship6
  13. Securing sponsorship, which led to the package being accepted into the experimental branch of apt

Since the beginning of GSoC, the package has also been promoted to the unstable branch2.


2. DBCSR ROCm and Multi-Arch Support

During my GSoC project, I worked on extending the DBCSR (Distributed Block Compressed Sparse Row)7 package to improve its ROCm/HIP support, and on handling multi-architecture GPU kernels in a way that is practical for both upstream maintainers and Debian package developers.

The code changes can be found at my dbcsr fork here8.

ROCm/HIP Enablement

  • Enabled ROCm backend support in DBCSR, allowing GPU acceleration beyond CUDA through HIP-based builds.
  • Investigated and resolved build issues specific to HIP kernels within DBCSR.

Multi-Architecture GPU Kernel Handling

(The following content was presented in greater detail at DebConf’25 as well. The presentation video can be found here9 and the presentation slides can be found here10).

  • DBCSR contains GPU kernels that are heavily optimized for specific architectures. By default, these are built for a single target architecture, which poses challenges for packaging where binaries need to support multiple possible GPU targets.
  • Explored different strategies for solving the multi-arch GPU kernel distribution problem, including:

    • Option 1: Fat binaries (embedding multiple GPU architectures into a single binary, with runtime dispatch). This is ideal for end-users but requires deeper changes upstream and is not straightforward with HIP/ROCm.
    • Option 2: Arch-specific libraries (e.g., libdbcsr.gfxXXX.a), where the alternatives system or explicit user selection would determine which one is used. This solves the problem but pushes complexity downstream into packaging and user configuration.
    • Option 3: Prefixed functions inside a single file, where kernels are compiled separately per architecture, functions are renamed with an arch prefix, and runtime logic in DBCSR decides which kernel to invoke. This shifts complexity upstream but could give a clean downstream experience.
  • I critically analyzed these options in the context of Debian packaging and upstream maintainability. Arch-specific .a files introduce exponential dependency complexity. The prefixed-function approach seemed like a plausible way forward, though it requires upstream buy-in.
  • After consulting with my mentor, these concerns were raised in the dbcsr repository as a discussion here11 

Summary

My work involved:

  • Enabling HIP/ROCm support in DBCSR.

  • Prototyping strategies for handling GPU multi-arch builds.

  • Evaluating the trade-offs between upstream maintainability and downstream packaging complexity.


3. gloo, hipification and source code issues

One of the other packages that were targeted was gloo12. It is a collective communications library and has the implementations of different Machine Learning communication algorithms.

The code changes can be found at my gloo fork here13 (some changes have not been committed at the time of writing).

HIP/ROCm Enablement

  1. Fixing old ROCm CMake functions: The upstream Gloo codebase still used old ROCm CMake functions that began with the hip_ prefix (for example, hip_add_executable). These functions have since been deprecated/removed. I updated the build system to use the modern ROCm CMake equivalents so that the package can build properly in a current ROCm environment.

  2. Debian packaging changes: I modified debian/control to add a new package, libgloo-rocm, in addition to the existing packages. This allows proper separation and handling of ROCm-enabled builds in Debian.

  3. First successful library build: After these changes, I was able to successfully build the library. However, I ran into issues when trying to produce the shared library: there were undefined symbol errors at link time.

Source Code Issue

On investigating the undefined symbol errors, I identified that these came from a lack of explicit template instantiation for some Gloo classes. Since C++ templates only get compiled when explicitly used or instantiated, this resulted in missing symbols in the shared library.

To solve this, I explored the source code and noticed that the HIP backend code was not natively written — it was generated from the CUDA backend using a custom hipification script maintained by the repo.

  • I experimented with modifying the HIPification process itself, trying out hipify-perl14 instead of the repository’s custom Python script.
  • I also tried tweaking the source code in places where template instantiations were missing, so that the ROCm build would correctly export the needed symbols.

Summary

The issue is still unresolved. The core problem lies in how the source code is structured: the HIP backend is almost entirely auto-generated from CUDA code, and the process does not handle template instantiations correctly. Because of this, the Debian package for Gloo with ROCm support is not yet ready for release, and further source-level fixes are required to make the ROCm build reliable.


4. cp2k

CP2K15 is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems.

HIP/ROCm Enablement

cp2k depends on dbcsr and hence, HIP/ROCm enablement in this package requires the dbcsr16 package to get through first.

Even though dbcsr isn’t ready yet, it was worthwhile to plan how cp2k will be built with HIP/ROCm once we have dbcsr in place. Upon doing this, it became clear that the architecture-specific libraries provided by the dbcsr package will result in a complicated build process for cp2k.

No changes have been made to this package yet and more concrete steps shall be taken once the dbcsr package work is completed.

Summary

The multi-arch build process for cp2k may be complicated by the one-static-library-per-architecture method used in the dependent package, dbcsr.


Auxiliary Work & Activities

While working on the aforementioned GSoC goals, a few other things were also done.

  1. libamdhip64-dev bug file17

    While trying to enable HIP/ROCm in dbcsr, CMakeDetermineHIPCompiler.cmake was unable to find the HIP runtime CMake package. After going through some similar issues faced by other developers earlier, it was decided to file a bug report under the libamdhip64-dev package.

    After discussions with and trying the changes suggested by Cory (my mentor) under the bug, the issue was resolved.

    It turns out the wrong compiler was being used by me! The gcc compiler was supposed to be used, while I was using hipcc. The bug was closed since it was not due to an issue with the package.

    Cory suggested that I add this info to the ROCm wiki page. It is yet to be done, and hopefully I will get it done soon.

  2. DebConf25 Talk

    After facing the multi-arch build dilemma with dbcsr (and also getting to know about the issues faced by other fellow package developers), I came to realise that this was more than a packaging, build or programming issue. GPU-packaging was facing a policy issue.

    Hence, I decided to cover this problem in greater detail at my DebConf25 Virtual Presentation under the Outreach Session.

    Shoutout to Cory for his support and Lucas Kanashiro for encouraging me to present my work!

  3. Bi-Weekly AMD ROCm Meetings

    Shortly after the Coding period started, Cory began the initiative of Bi-Weekly AMD ROCm Meetings18. Being a part of the meetings (I participated in all but one!), seeing the work the other folks are doing and being able to discuss my own problems was a delight.

  4. (Upcoming) IndiaFOSS 2025 Talk

    After understanding the nuances and beauty of the Debian packaging ecosystem in these months, I decided to spread the word about Debian packaging and packaging software in general. My talk19 for the same got accepted at the upcoming IndiaFOSS 202520 conference!

    I hope this brings more people towards the packaging ecosystem and the Debian developer ecosystem.

Conclusion

My GSoC time was fantastic! I plan to complete the work that I started during GSoC, and to continue beyond it. Working with Cory21 and Utkarsh22 (a fellow GSoC’25 contributor under Cory) has been a very positive experience.

HIP/ROCm GPU-packaging is in a nascent stage. It is an exciting time to be in this space right now. The problems are new and never encountered before (CPU packaging isn’t architecture-specific!). The problems we shall face in the coming time, and our solutions to them, will set a precedent for the future.

References

1 : https://summerofcode.withgoogle.com/programs/2025/projects/9s4jUjV0

2 : https://tracker.debian.org/pkg/hipblas-common

3 : https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1105114

4 : https://salsa.debian.org/rocm-team

5 : https://packages.debian.org/source/sid/hipblas-common

6 : https://lists.debian.org/debian-ai/2025/05/msg00088.html

7 : https://www.cp2k.org/dbcsr

8 : https://salsa.debian.org/Spaarsh/dbcsr/

9 : https://drive.google.com/file/d/14WQuTMcI-L0lbi3zkUc9pT6RGwwVY0j1/view?usp=sharing

10 : https://docs.google.com/presentation/d/1p-nkHPgg5C5jKGy7ySZ8rts5G2vNFQpQJQ8UySOWgVE

11 : https://github.com/cp2k/dbcsr/discussions/933

12 : https://github.com/pytorch/gloo

13 : https://salsa.debian.org/Spaarsh/gloo

14 : https://tracker.debian.org/pkg/hipify

15 : https://www.cp2k.org/

16 : https://tracker.debian.org/pkg/dbcsr

17 : https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1108159

18 : https://lists.debian.org/debian-ai/2025/05/msg00113.html

19 : https://fossunited.org/c/indiafoss/2025/cfp/dpq0b26ece

20 : https://fossunited.org/indiafoss/2025

21 : https://salsa.debian.org/cgmb

22 : https://salsa.debian.org/utk4r-sh

02 September, 2025 01:32PM by Outreach team

Jonathan Dowland

Luminal and Lateral

For my birthday I was gifted copies of Eno's last two albums, Luminal and Lateral, both of which are collaborations with Beatie Wolfe.

Luminal and Lateral records in the sunshine

Let's start with the art. I love this semi-minimalist, bold style, and how the LP itself (in their coloured, bio-vinyl variants) feels like it's part of the artwork. I like the way the artist credits mirror each other: Wolfe, Eno for Luminal; Eno, Wolfe for Lateral.

My first "bio vinyl" LP was the Cure's last one, last year. Ahead of it arriving I planned to blog about it, but when it came arrived it turned out I had nothing interesting to say. In terms of how it feels, or sounds, it's basically the same as the traditional vinyl formulation.

The attraction of bio-vinyl to well-known environmentalists like Eno (and I guess, the Cure) is the reduced environmental impact due to changing out the petroleum and other ingredients with recycled used cooking oil. You can read more about bio-vinyl if you wish. I try not to be too cynical about things like this; my immediate response is to assume some kind of green-washing PR campaign (I'm currently reading Consumed by Saabira Chaudhuri, an excellent book that is sadly only fuelling my cynicism) but I know Eno in particular takes this stuff seriously and has likely done more than a surface-level evaluation. So perhaps every little helps.

On to the music. The first few cuts I heard from the albums earlier in the year didn't inspire me much. Possibly I heard something from Luminal, the vocal album; and I'm generally more drawn to Eno's ambient work. (Lateral is ambient instrumental). I was not otherwise familiar with Beatie Wolfe. On returning to the albums months later, I found them more compelling. Luminal reminds me a little of Apollo: Atmospheres and Soundtracks. Lateral worked well as space music for PhD-correction sessions.

The pair recently announced a third album, Liminal, to arrive in October, totally throwing off the symmetry of the first two. Two of its tracks are available to stream now in the usual places.

02 September, 2025 11:23AM

Junichi Uekawa

September.

September. Kids Summer Vacation is over and things are starting to come back to the usual rhythm.

02 September, 2025 12:30AM by Junichi Uekawa

Charles

Making KGB less noisy

This past month I set up KGB to send notifications to #debian-lts when new merge requests are created in the LTS website’s repo, and I learned a couple of cool things. I’ve been trying to document things more so I don’t have to research the same topic months later, hence the blog seemed like a good idea, especially since many debianites have KGB set on their favorite IRC channel and this post will go to planet.debian.org.

Selecting What Goes to IRC

Salsa (Debian’s GitLab instance) can generate a lot of events for things that happen on a repository, and a lot of them can be pushed to KGB via webhooks. Generally I prefer a minimal set enabled, otherwise it’s too much clutter on the IRC side, but it’s important to go through each option to see what makes sense or not. From my experience, the following ones are the most useful to have on:

  • Push events
  • Tag push events
  • Comments
  • Issue events
  • Merge request events
  • Pipeline events

Reducing the Noise

For Debian packaging, one may find it useful to add a pattern filter so only the packaging branch updates go to IRC. If you are using DEP-14, that’s pretty easy, “debian/*” will do the job.

Notably, “Job events” are left out. Basically it’s just too much info, you get one alert when a job is scheduled, then when it’s started and another one when it’s completed. Well, each pipeline has at least a few of them, multiply by three and you can understand my point.

Besides that, pipelines also generate the same amount of events as jobs, so they might be a problem too. Well, KGB comes to the rescue: it allows you to filter pipeline events, because you really only care about the pipeline when it fails ;-) To do just that, use pipeline_only_status=failed.

Another interesting option is limiting the commits shown when the push event has too many of them. One can do that with squash_threshold=3. Remember I want less clutter?! Three commits is my limit here.

Final Result

The final URL for me looks like this (newlines added for clarity):

http://kgb.debian.net:9418/webhook/?channel=debian-<your_preferred_channel>&
                                    network=oftc&
                                    private=1&
                                    use_color=1&
                                    use_irc_notices=1&
                                    squash_threshold=3&
                                    pipeline_only_status=failed

You can see there are more options than the ones I described earlier, well, now it’s your time to go through KGB’s documentation and learn a thing or two ;-)
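
As an aside, if you prefer the command line to clicking through Salsa’s web UI, the same webhook can also be registered via GitLab’s project hooks API. Here is an untested sketch; the access token, project ID and channel name are placeholders, and the branch filter assumes DEP-14 branch naming:

curl --request POST \
     --header "PRIVATE-TOKEN: <your-token>" \
     --data-urlencode "url=http://kgb.debian.net:9418/webhook/?channel=debian-<your_preferred_channel>&network=oftc&private=1&use_color=1&use_irc_notices=1&squash_threshold=3&pipeline_only_status=failed" \
     --data "push_events=true" \
     --data "tag_push_events=true" \
     --data "note_events=true" \
     --data "issues_events=true" \
     --data "merge_requests_events=true" \
     --data "pipeline_events=true" \
     --data "push_events_branch_filter=debian/*" \
     "https://salsa.debian.org/api/v4/projects/<project-id>/hooks"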

02 September, 2025 12:18AM

September 01, 2025

Guido Günther

Free Software Activities August 2025

Another short status update of what happened on my side last month. Released Phosh 0.49.0 and added some more QoL improvements to the Phosh Mobile stack (e.g. around Cell broadcasts). Also pulled my SHIFT6mq out of the drawer (where it had been sitting far too long) and got it to show a picture after a small driver fix. Thanks to the work the sdm845-mainlining folks are doing, that was all that was needed. If I can get touch to work better that would be another nice device for demoing Phosh.

See below for details on the above and more:

phosh

  • Allow to auto-start pomodoro timer (MR)
  • Improve mpris player thumbnails (MR)
  • Cellbroadcast fixes (MR)
  • Release (MR)
  • searchd related build system fixes (MR)
  • gchar vs char cleanup (MR)
  • upcoming-events: Add filter icons (MR)
  • Fix missing header dependency (MR)
  • Release 0.49~rc1, 0.49.0
  • Fix some incorrect callback signatures (MR)

phoc

  • Workspace indicators (MR)
  • Don't overwrite picked output (MR)
  • Release 0.49~rc1, 0.49.0
  • Raise nofile rlimit (MR)
  • Fix getting started page title (MR)
  • Update cursor when layer surface moves away from under the cursor (MR)
  • Support cursor-shape-v1 protocol (MR)
  • pointer: Use libinput's LIBINPUT_CONFIG_DRAG_LOCK_ENABLED_STICKY: (MR)

phosh-mobile-settings

  • Cellbroadcast fixes (MR)
  • build: Link statically against libcellbroadcast subproject (MR)
  • Release 0.49~rc1, 0.49.0

stevia (formerly phosh-osk-stub)

  • Release 0.49~rc1, 0.49.0
  • Fix emoji matching on big endian (MR)
  • Fix emoji matching again after switching to GTK's embedded emoji data (MR)
  • Fix scaling when adding new layouts (MR)
  • Improve character popover and other fixes (MR)

xdg-desktop-portal-phosh

pfs

  • Let pressing <enter> save the file (MR)

feedbackd

  • Release 0.8.4
  • Fix important override. (MR)

feedbackd-device-themes

  • Release 0.8.5
  • Lower status LED brightness on sargo (MR)

libcmatrix

  • Track room version (MR)

Chatty

  • Warning fixes (MR)
  • matrix: Show room version (MR)

Debian

Cellbroadcastd

  • Fix daemon systemd target (MR)
  • Ignore case when matching country when looking up channels (MR)
  • Meson dependency fix (MR)

ModemManager

  • Fix two country codes (MR)

gnome-clocks

  • Fix sporadic wakeup due to timers not being disposed (MR), also resulting in a vala issue

git-buildpackage

  • Move test data to salsa and fetch it from there: deb, rpm, MR
  • clone: Be less strict on vcs-git URLs (MR)

mobian-recipies

  • Fail early without an ssh key (MR)

Linux

  • Shift6mq: Fix clock frequency of panel driver (MR)
  • Shift6mq: Set chassis type (MR)
  • Shift6mq: Tried to improve the touch driver to increase the sensitivity / sample rate, no success yet.

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh/upcoming-events: Allow to filter out empty days in (MR)
  • phosh: keypad and search bar CSS improvements (MR)
  • p-m-s: Tweaks definition parsing code (MR)
  • p-m-s: osk-shortcuts: UI tweaks (MR)
  • p-m-s: Add gchar check (MR)
  • p-m-s: Make it a search provider (MR)
  • phoc: toplevel-addons (MR)
  • debian: MM stable update (MR)
  • stevia: Use default font (MR)
  • upcoming-events: Use filtered list model (MR)
  • pms: Tweaks rename (MR)
  • pms: Clang build fix (MR)
  • feedbackd: Udev rule for AW86927 (FP5) (MR)
  • xdpm: Allow pure rust build for use in xdg-d-p-phosh (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 September, 2025 06:05AM

Birger Schacht

Status update, August 2025

Due to the freeze I did not do that many uploads in the last few months, so there were various new releases I packaged once Trixie was released. Regarding the release of Debian 13, Trixie, I wrote a small summary of the changes in my packages.

I uploaded an unreleased version of cage to experimental, to prepare for the transition to wlroots-0.19. Both sway and labwc already had packages in experimental that depended on the new wlroots version. When the transition happened, I uploaded the cage version to unstable, as well as labwc 0.9.1 and sway 1.11.

I updated

  • foot to 1.23.1
  • waybar to 0.14.0
  • swaylock to 1.8.3
  • git-quick-stats to 2.7.0
  • swayimg to 4.5
  • usbguard to 1.1.4
  • fcft to 3.3.2
  • fnott to 1.8.0
  • wdisplays to 1.1.3
  • wev to 1.1.0
  • wlopm to 1.0.0
  • wmenu to 0.2.0
  • libsfdo to 0.1.4

Most of the packages I uploaded using git-debpush, some of them could not be uploaded this way due to upstream using git submodules (this is 1107219). I also created 1112040 - git-debpush: should also say which tag it created and 1111504 - git-debpush: pristine-tar check warns about pristine-tar data thats not present (which is already fixed).

I uploaded wayback 0.2 to NEW, where it is waiting for review, (ITP).

In my dayjob I extended the place lookup form of apis-core-rdf to allow searching for places and selecting them on a map, using leaflet and the nominatim API. Another issue I worked on was about highlighting those inputs of our generic list filter that are used to filter the results. I released a couple of bugfix releases for the v0.50 release, then v0.51 and two bugfix releases, and then v0.52 and another couple of bugfix releases. v0.53 will land in a couple of days. I also released v0.6.2 of apis-highlighter-ng, which is sort of a plugin for apis-core-rdf that allows to highlight parts of a text and link them to whatever Django object (in our case relations).

01 September, 2025 05:28AM

Russ Allbery

Review: Regenesis

Review: Regenesis, by C.J. Cherryh

Series: Cyteen #2
Publisher: DAW
Copyright: January 2009
ISBN: 0-7564-0592-0
Format: Mass market
Pages: 682

The main text below is an edited version of my original review of Regenesis written on 2012-12-21. Additional comments from my re-read are after the original review.

Regenesis is a direct sequel to Cyteen, picking up very shortly after the end of that book and featuring all of the same characters. It would be absolutely pointless to read this book without first reading Cyteen; all of the emotional resonance and world-building that make Regenesis work are done there, and you will almost certainly know whether you want to read it after reading the first book. Besides, Cyteen is one of the best SF novels ever written and not the novel to skip.

Because this is such a direct sequel, it's impossible to provide a good description of Regenesis without spoiling at least characters and general plot developments from Cyteen. So stop reading here if you've not yet read the previous book.

I've had this book for a while, and re-read Cyteen in anticipation of reading it, but I've been nervous about it. One of the best parts of Cyteen is that Cherryh didn't belabor the ending, and I wasn't sure what part of the plot could be reasonably extended. Making me more nervous was the back-cover text that framed the novel as an investigation of who actually killed the first Ari, a question that was fairly firmly in the past by the end of Cyteen and that neither I nor the characters had much interest in answering. Cyteen was also a magical blend of sympathetic characters, taut tension, complex plotting, and wonderful catharsis, the sort of lightning in a bottle that can rarely be caught twice.

I need not have worried. If someone had told me that Regenesis was another 700 pages of my favorite section of Cyteen, I would have been dubious. But that's exactly what it is. And the characters only care about Ari's murderer because it comes up, fairly late in the novel, as a clue in another problem.

Ari and Justin are back in the safe laboratory environment of Reseune, safe now that politics are not trying to kill or control them. Yanni has taken over administration. There is a general truce, and even some deeper agreement. Everyone can take a breath and relax, albeit with the presence of Justin's father Jordan as an ongoing irritant. But broader Union politics are not stable: there is an election in progress for the Defense councilor that may break the tenuous majority in favor of Reseune and the Science Directorate, and Yanni is working out a compromise to gain more support by turning a terraforming project loose on a remote world. As the election and the politics heat up, interpersonal relationships abruptly deteriorate, tensions with Jordan sharply worsen, and there may be moles in Reseune's iron-clad security. Navigating the crisis while keeping her chosen family safe will once again tax all of Ari's abilities.

The third section of Cyteen, where Ari finally has the tools to take fate into her own hands and starts playing everyone off against each other, is one of my favorite sections of any book. If it was yours as well, Regenesis is another 700 pages of exactly that. As an extension and revisiting, it does lose a bit of immediacy and surprise from the original. Regenesis is also less concerned with the larger questions of azi society, the nature of thought and personality, loyalty and authority, and the best model for the development of human civilization. It's more of a political thriller. But it's a political thriller that recaptures much of the drama and tension of Cyteen and is full of exceptionally smart and paranoid people thinking through all angles of a problem, working fast on their feet, and successfully navigating tricky and treacherous political landscapes.

And, like Cyteen but unlike others of Cherryh's novels I've read, it's a novel about empowerment, about seizing control of one's surroundings and effectively using all of the capability and leverage at one's fingertips. That gives it a catharsis that's almost as good as Cyteen.

It's also, like its predecessor, a surprisingly authoritarian novel. I think it's in that, more than anything else in these books, that one sees the impact of the azi. Regenesis makes it clear that the story is set, not in a typical society, but inside a sort of corporation, with an essentially hierarchical governance structure. There are other SF novels set within corporations (Solitaire comes to mind), but normally they follow peons or at best mid-level personnel or field agents, or otherwise take the viewpoint of the employees or the exploited. When they follow the corporate leaders, the focus usually isn't down inside the organization, but out into the world, with the corporation as silent resources on which the protagonist can draw.

Regenesis is instead about the leadership. It's about decisions about the future of humanity that characters feel they can make undemocratically (in part because they or their predecessors have effectively engineered the opinions of the democratic population), but it's also about how one manages and secures a top-down organization. Reseune is, as in the previous novel, a paranoid's suspicions come true; everyone is out to get everyone else, or at least might be, and the level of omnipresent security and threat forces a close parsing of alliances and motivations that elevates loyalty to the greatest virtue.

In Cyteen, we had long enough with Ari to see the basic shape of her personality and her slight divergences from her predecessor, but her actions are mostly driven by necessity. Regenesis gives us more of a picture of what she's like when her actions aren't forced, and here I think Cherryh manages a masterpiece of subtle characterization. Ari has diverged substantially from her predecessor without always realizing, and those divergences are firmly grounded in the differences she found or created between her life and the first Ari's. She has friends, confidants, and a community, which combined with past trauma has made her fiercely, powerfully protective. It's that protective instinct that weaves the plot together. So many of the events of Cyteen and Regenesis are driven by people's varying reactions to trauma.

If you, like me, loved the last third of Cyteen, read this, because Regenesis is more of exactly that. Cherryh finds new politics, new challenges, and a new and original plot within the same world and with the same characters, but it has the same feel of maneuvering, analysis, and decisive action. You will, as with Cyteen have to be comfortable with pages of internal monologue from people thinking through all sides of a problem. If you didn't like that in the previous book, avoid this one; if you loved it, here's the sequel you didn't know you were waiting for.

Original rating: 9 out of 10


Some additional thoughts after re-reading Regenesis in 2025:

Cyteen mostly held up to a re-reading and I had fond memories of Regenesis and hoped that it would as well. Unfortunately, it did not. I think I can see the shape of what I enjoyed the first time I read it, but I apparently was in precisely the right mood for this specific type of political power fantasy.

I did at least say that you have to be comfortable with pages of internal monologue, but on re-reading, there was considerably more of that than I remembered and it was quite repetitive. Ari spends most of the book chasing her tail, going over and around and beside the same theories that she'd already considered and worrying over the nuances of every position. The last time around, I clearly enjoyed that; this time, I found it exhausting and not very well-written. The political maneuvering is not that deep; Ari just shows every minutia of her analysis.

Regenesis also has more about the big questions of how to design a society and the role of the azi than I had remembered, but I'm not sure those discussions reach any satisfying conclusions. The book puts a great deal of effort into trying to convince the reader that Ari is capable of designing sociological structures that will shape Union society for generations to come through, mostly, manipulation of azi programming (deep sets is the term used in the book). I didn't find this entirely convincing the first time around, and I was even less convinced in this re-read. Human societies are a wicked problem, and I don't find Cherryh's computer projections any more convincing than Asimov's psychohistory.

Related, I am surprised, in retrospect, that the authoritarian underpinnings of this book didn't bother me more on my first read. They were blatantly obvious on the second read. This felt like something Cherryh put into these books intentionally, and I think it's left intentionally ambiguous whether the reader is supposed to agree with Ari's goals and decisions, but I was much less in the mood on this re-read to read about Ari making blatantly authoritarian decisions about the future of society simply because she's smart and thinks she, unlike others, is acting ethically. I say this even though I like Ari and mostly enjoyed spending time in her head. But there is a deep fantasy of being able to reprogram society at play here that looks a lot nastier from the perspective of 2025 than apparently it did to me in 2012.

Florian and Catlin are still my favorite characters in the series, though. I find it oddly satisfying to read about truly competent bodyguards, although like all of the azi they sit in an (I think intentionally) disturbing space of ambiguity between androids and human slaves.

The somewhat too frank sexuality from Cyteen is still present in Regenesis, but I found it a bit less off-putting, mostly because everyone is older. The authoritarian bent is stronger, since Regenesis is the story of Ari consolidating power rather than the underdog power struggle of Cyteen, and I had less tolerance for it on this re-read.

The main problem with this book on re-read was that I bogged down about halfway through and found excuses to do other things rather than finish it. On the first read, I was apparently in precisely the right mood to read about Ari building a fortified home for all of her friends; this time, it felt like endless logistics and musings on interior decorating that didn't advance the plot. Similarly, Justin and Grant's slow absorption into Ari's orbit felt like a satisfying slow burn friendship in my previous reading and this time felt touchy and repetitive.

I was one of the few avid defenders of Regenesis the first time I read it, and sadly I've joined the general reaction on a re-read: This is not a very good book. It's too long, chases its own tail a bit too much, introduces a lot more authoritarianism and doesn't question it as directly as I wanted, and gets even deeper into Cherryh's invented pseudo-psychology than Cyteen. I have a high tolerance for the endless discussions of azi deep sets and human flux thinking, and even I got bored this time through.

On re-read, this book was nowhere near as good as I thought it was originally, and I would only recommend it to people who loved Cyteen and who really wanted a continuation of Ari's story, even if it is flabby and not as well-written. I have normally been keeping the rating of my first read of books, but I went back and lowered this one by two points to ensure it didn't show as high on my list of recommendations.

Re-read rating: 6 out of 10

01 September, 2025 04:41AM

Iustin Pop

Small PSA: git.k1024.org turndown

Just a small thing: I’m going to turn down the very simple gitweb interface at https://git.k1024.org/. Way back, I thought I should have a backup for GitHub, but the decentralised Git model makes this not really needed, and gitweb is actually pretty heavy, even if it is really bare-bones.

Practically, as small as that site was, it was fine before the LLM era. Since then, I keep getting lots of traffic, as if these repositories which already exist on GitHub hold critical training information… Thus, I finally got the impetus to turn it down, for no actual loss. Keeping it would make sense only if I were to change it into a proper forge, but that’s a different beast, in which I have no interest (as a public service). So, down it goes.

I’ll probably replace all of it with a single static page, text-only even 😄

Next in terms of simplification will probably be removing series from this blog, since there’s not enough clear separation between tags and series. Or at least, I’m not consistent enough to write a very clean set of articles that can be ordered and numbered as a unit.

01 September, 2025 12:08AM

August 31, 2025

hackergotchi for Otto Kekäläinen

Otto Kekäläinen

Managing procrastination and distractions

I’ve noticed that procrastination and an inability to be consistently productive at work have become quite common in recent years. This is clearly visible in younger people who have grown up with an endless stream of entertainment literally at their fingertips, on their mobile phones. It is, however, a trap one can escape with a little bit of help.

Procrastination is natural; they say humans are lazy by nature, after all. Probably all of us have had moments when we chose to postpone a task we knew we should be working on, and instead spent our time on secondary tasks (valorisation). A classic example is cleaning your apartment when you should be preparing for an exam. Some may procrastinate by not doing any work at all, and just watching YouTube videos or the like. For some people, typically those who are in their 20s and early in their career, procrastination can be a big challenge, and finding the discipline to stick to planned work may need intentional extra effort, and perhaps even external help.

During my 20+ year career in software development I’ve been blessed to work with engineers of various backgrounds, each with their own unique set of strengths. I have also helped many grow in various areas and overcome challenges, such as a lack of intrinsic motivation or difficulty managing procrastination, and some were able to get it in check with just some simple advice.

Distance yourself from the digital distractions

The key to avoiding distractions and procrastination is to make it inconvenient enough that you rarely do it. If continuing to do work is easier than switching to procrastination, work is more likely to continue.

Tips to minimize digital distractions, listed in order of importance:

  1. Put your phone away. Just like when you go to a movie and turn off your phone for two hours, you can put the phone away completely when starting to work. Put the phone in a different room to ensure there is enough physical distance between you and the distraction, so it is impossible for you to just take a “quick peek”.
  2. Turn off notifications from apps. Don’t let the apps call you like sirens luring Odysseus. You don’t need to have all the notifications. You will see what the apps have when you eventually open them at a time you choose to use them.
  3. Remove or disable social media apps, games and the like from your phone and your computer. You can install them again when you are on vacation. You can probably live without them for some time. If you can’t remove them, explore your phone’s screen time restriction features to limit your own access to apps that most often waste your time. These features are sometimes listed in the phone settings under “digital health”.
  4. Have a separate work computer and work phone. Having dedicated ones just for work that are void of all unnecessary temptations helps keep distance from the devices that could derail your focus.
  5. Listen to music. If you feel your brain needs a dose of dopamine to get you going, listening to music can satisfy that craving while still letting you keep working.

Doing a full digital detox is probably not practical, or not sustainable for an extended time. One needs apps to stay in touch with friends and family, and staying current in software development probably requires spending some time reading news online and such. However, the tips above can help contain the distractions and minimize the spontaneous attention they get.

Some of the distractions may ironically be from the work itself, for example Slack notifications or new email notifications. I recommend turning them off for a couple of hours every day to have some distraction free time. It should be enough to check work mail a couple times a day. Checking them every hour probably does not add much overall value for the company unless your work is in sales or support where the main task itself is responding to emails.

Distraction free work environment

Following the same principle of distancing yourself from distractions, try to use a dedicated physical space for working. If you don’t have a spare room to dedicate to work, use a neighborhood café, sign up for a local co-working space, or start commuting to the company office so that you have a space where you can focus on work.

Break down tasks into smaller steps

Sometimes people postpone tasks because they feel intimidated by the size or complexity of a task. In software engineering in particular, problems may be vague and appear large until one reaches the breakthrough that brings the vision of how to tackle them. Breaking down problems into smaller, more manageable pieces has many advantages in software engineering. Not only can it help with task avoidance, but it also makes the problem easier to analyze, makes it easier to propose and test solutions, and builds a solid foundation to expand upon until you ultimately reach a full solution to the entire larger problem.

Working on big problems as a chain of smaller tasks may also offer more opportunities to celebrate success on completing each subtask and help getting in a suitable cadence of solving a single thing, taking a break and then tackling the next issue.

Breaking down a task into concrete steps may also help with getting more realistic time estimations. Sometimes procrastination isn’t real — someone could just be overly ambitious and feel bad about themselves for not doing an unrealistic amount of work.

Intrinsic motivation

Of course, you should follow your passion when possible. Strive to pick a career that you enjoy, and thus maximize the intrinsic motivation you experience. However, even a dream job is still a job. Nobody is ever paid to do whatever they want. Any work will include at least some tasks that feel like a chore or otherwise like something you would not do unless paid to.

Some would say that the definition of work itself is having to do things one would otherwise not do. You can only fully do whatever you want while on vacation or when you choose to not have a job at all. But if you have a job, you simply need to find the intrinsic motivation to do it.

Simply put, some tasks are just unpleasant or boring. Our natural inclination is to avoid them in favor of more enjoyable activities. For these situations we just have to find the discipline to force ourselves to do the tasks and figuratively speaking whip ourselves into being motivated to complete the tasks.

Extrinsic motivation

As the name implies, this is something people external to you need to provide, such as your employer or manager. If you have challenges in managing yourself and delivering results on a regular basis, somebody else needs to set goals and deadlines and keep you accountable for them. At the end of the day this means that, eventually, you will stop receiving a salary or other payments unless you do your job.

Forcing people to do something isn’t nice, but eventually it needs to be done. It would not be fair for an employer to pay those who did their work the same salary as those who procrastinated and fell short on their tasks.

If you work solo, you can also simulate extrinsic motivation by publicly announcing milestones and deadlines to build up pressure for yourself to meet them and avoid public humiliation. It is a well-studied and scientifically proven phenomenon that most university students procrastinate at the start of assignments, and truly start working on them only once the deadline is imminent.

External help for addictions

If procrastination is mainly due to a single distraction that is always on your mind, it may be a sign of an addiction. For example, constantly thinking about a computer game or staying up late playing a computer game, to the extent that it seriously affects your ability to work, may be a symptom of an addiction, and getting out of it may be easier with external help.

Discipline and structure

Most of the time procrastination is not due to an addiction, but simply due to a lack of self-discipline and structure. The good thing is that those things can be learned. It is mostly a matter of getting into new habits, which most young software engineers pick up more or less automatically while working alongside more senior ones.

Hopefully these tips can help you stay on track and ensure you do everything you are expected to do with clear focus, and on time!

31 August, 2025 12:00AM

August 30, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Bruteforcing pwgen passwords

I needed to bruteforce some passwords that I happened to know were generated with the default mode (“pronounceable”) of pwgen, so I spent a fair amount of time writing software to help. It went through a whole lot of iterations and ended up being more efficient than I had ever assumed would be possible (although it's still nowhere near as efficient as it should ideally be). So now I'm sharing it with you. If you have IPv6 and can reach git.sesse.net, that is.

I'm pasting the entire README below. Remember to use it for ethical purposes.

Introduction
============

pwbrute creates all possible pwgen passwords (default tty settings, no -s).
It matches pwgen 2.08. It supports ordering them by most common first.
Note that pwgen before 2.07 also supported a special “non-tty mode”
that was even less secure (no digits, no uppercase) which is not supported here.

To get started, do

   g++ -std=c++20 -O2 -o pwbrute pwbrute.cc -ljemalloc
  ./pwbrute --raw --sort --expand --verbose > passwords.txt

wait for an hour or two and you're left with 276B passwords in order
(about 2.5TB). (You can run without -ljemalloc, but the glibc malloc
makes pwbrute take about 50% more time.)

pwbrute is not a finished, polished product. Do not expect this to be
suitable for inclusion in e.g. a Linux distribution.


A brief exposition of pwgen's security
======================================

pwgen is a program that is fairly widely used in Linux/UNIX systems
to generate “pronounceable” (and thus supposedly easier-to-remember)
passwords. On the surface of it, the default 8-letter passwords with
uppercase letters, lowercase letters and digits would have a password
space of

  62^8 = 218,340,105,584,896 ~= 47.63 bits

This isn't enough to save you from password cracking against fast hashes
(e.g. NTLM), but it's enough for almost everything else.
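As a quick sanity check of that arithmetic (a one-liner, assuming python3 is available):

  python3 -c 'import math; print(62**8, round(math.log2(62**8), 2))'
  # prints: 218340105584896 47.63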

However, pwgen (without -s) does by design not use this entire space.
It builds passwords from a list of 40 “phonemes” (a, ae, ah, ai, b,
c, ch, ...) in sequence, with some rules of which can come after each
others (e.g. the combination f-g is disallowed, since any consonant
phoneme must be followed by a vowel or sometimes a digit), and sometimes
digits. Furthermore, some phonemes may be uppercased (only first letter,
in case of two-letter phonemes). In all, these restrictions mean that
the number of producible passwords drops to

  307,131,320,668 ~= 38.16 bits

Furthermore, if a password does not contain at least one uppercase letter
and one digit, it is rejected. This doesn't affect that many passwords,
but it's still down to

  276,612,845,450 ~= 38.00 bits

You would believe that this means that to get to a 50% chance of cracking
a password, you'd need to test about ~138 billion passwords; however, the
effective entropy is much, much worse than that:

First, consider that digits are inserted (at valid points) only with
30% probability, and phonemes are uppercased (at valid points) only
with 20% probability. This means that a password like “Ahdaiy7i” is
_much_ more likely than e.g. “EXuL8OhP” (five uppercase letters),
even though both are possible to generate.

Furthermore, when building up the password from left to right, every
letter is not equally likely -- every _phoneme_ is equally likely.
Since at any given point, (e.g.) “ai” is as likely as “a”, a lot fewer
rolls of the dice are required to get to eight letters if the password
contains many diphthongs (two-letter phonemes). This makes them vastly
overrepresented. E.g., the specific password “aechae0A” has three diphthongs
and a probability of about 1 in 12 million of being generated, while
“Oozaey7Y” has only two diphthongs (but an extra capital letter) and a
probability of about 1 in 9.33 _billion_!

In all, this means that to get to 50% probability of cracking a given
pwgen password (assuming you know that it was indeed generated with
pwgen, without -s), you need to test about 405 million passwords.
Note that pwgen gives out a list of passwords and lets the user choose,
which may make this easier or harder; I've had real-world single-password
cracks that fell after only ~400k attempts (~2% probability if the user
has chosen at random, but they most likely picked one that looked more
beautiful to them somehow).

This is all known; I reported the limited keyspace in 2004 (Debian bug
#276976), and Solar Designer reported the poor entropy in CVE-2013-4441.
(I discovered the entropy issues independently from them a couple of
months later, then discovered that it was already known, and didn't
publish.) However, to the best of my knowledge, pwbrute is the first
public program that will actually generate the most likely passwords
efficiently for you.

Needless to say, I cannot recommend using pwgen's phoneme-based
passwords for anything that needs to stay secure. (I will not make
concrete recommendations beyond that; a lot of literature exists
on the subject.)


Speeding up things
==================

Very few users would want the entire set of passwords, given that the
later ones are incredibly unlikely (e.g., AB0AB0AB has a chance of about
2^-52.155, or 1 in 5 quadrillion). To avoid generating them all, you can use e.g.
-c -40, which will produce only those with more than approx. 2^-40 probability
before final rejection (roughly ~6B passwords).

(NOTE: Since the calculated probability is before final rejection of those
without a digit or uppercase letter, they will not sum to 1, but something
less; approx. 0.386637 for the default 8-letter passwords, or 2^-1.3709.
Take this into account when reading all text below.)

pwbrute is fast but not super-fast; it can generate about 80M passwords/sec
(~700 MB/sec) to stdout, of course depending on your CPUs. The expansion phase
generally takes nearly all the time; if your cracker could somehow accept the
unexpanded patterns (i.e., without --expand) for free, pwbrute would basically
be infinitely fast. (It would be possible to microoptimize the expansion,
perhaps to 1B passwords/sec/core if pulling out all the stops, but at some point,
it starts becoming a problem related to pipe I/O performance, not candidate
generation.)

Thus, if your cracker is very fast (e.g. hashcat cracking NTLM), it's suboptimal
to try to limit yourself to only pwbrute-created passwords. It's much, much
faster to just create a bunch of legal prefixes and then let hashcat try all
of them, even though this will test some “impossible” passwords.
For instance:

  ./pwbrute --first-stage-len 5 --raw > start5.pwd
  ./hashcat.bin -O -m 1000 ntlm.pwd -w 3 -a 6 start5.pwd -1 '?l?u?d' '?1?1?1'

The “combination” mode in hashcat is also not always ideal; consider using
rules instead.

If you need longer passwords than 8 characters, you may want to split the job
into multiple parts. For this, you can combine --first-stage-len with --prefix
to generate passwords in two stages, e.g. first generate all valid 3-letter
prefixes (“bah” is valid, “bbh” is not) and then for each prefix generate
all possible passwords.  This requires much less RAM, can go in parallel,
and is pretty efficient.

For instance, this will create all passwords up to probability 2^-30,
over 16 cores, in a form that doesn't use too much RAM:

  ./pwbrute -f 3 -r -s -e | parallel -j 16 "./pwbrute -p {} -c -30 -s 2>/dev/null | zstd -6 > up-to-30-{}.pwd.zst"

You can then use the included merge.cc utility to merge the sorted files
into a new sorted one (this requires not using pwbrute --raw, since merge
wants the probabilities to merge correctly):

  g++ -O2 -o merge merge.cc -lzstd
  ./merge up-to-30-*.pwd.zst | pv | pzstd -6 > up-to-30.pwd.zst

merge is fairly fast, but not infinitely so. Sorry.

Beware, zstd uses some decompression buffers that can be pretty big per-file
and there are lots of files, so if you put the limit lower than -30,
consider merging in multiple phases or giving -M to zstd, unless you want to
say hello to the OOM killer half-way into your merge.

As long as you give the --sort option to pwbrute, it is designed to give exactly
the same output in the same order every time (at the expense of a little bit of
speed during the pattern generation phase). This means that you can safely resume
an aborted generation or cracking job using the --skip=NUM flag, without worrying
that you'd lose some candidates.
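For example (a hypothetical resume invocation; it assumes --skip simply skips
that many already-emitted candidates when run with the same flags as the
original generation):

  ./pwbrute --raw --sort --expand --skip=1000000000 >> passwords.txt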

Here are some estimated numbers for various probability cutoffs, and how much
of the probability space they cover (after correction for rejected passwords):

  p >= 2^-25:           78,000 passwords   (  0.00% coverage,   0.63% probability)
  p >= 2^-26:          171,200 passwords   (  0.00% coverage,   1.12% probability)
  p >= 2^-27:        3,427,100 passwords   (  0.00% coverage,   9.35% probability)
  p >= 2^-28:        5,205,200 passwords   (  0.00% coverage,  12.01% probability)
  p >= 2^-29:        8,588,250 passwords   (  0.00% coverage,  14.17% probability)
  p >= 2^-30:       24,576,550 passwords   (  0.01% coverage,  19.23% probability)
  p >= 2^-31:       75,155,930 passwords   (  0.03% coverage,  27.58% probability)
  p >= 2^-32:      284,778,250 passwords   (  0.10% coverage,  43.81% probability)
  p >= 2^-33:      540,418,450 passwords   (  0.20% coverage,  55.14% probability)
  p >= 2^-34:      808,534,920 passwords   (  0.29% coverage,  60.49% probability)
  p >= 2^-35:    1,363,264,200 passwords   (  0.49% coverage,  66.28% probability)
  p >= 2^-36:    2,534,422,340 passwords   (  0.92% coverage,  72.36% probability)
  p >= 2^-37:    5,663,431,890 passwords   (  2.05% coverage,  80.54% probability)
  p >= 2^-38:   11,178,389,760 passwords   (  4.04% coverage,  87.75% probability)
  p >= 2^-39:   16,747,555,070 passwords   (  6.05% coverage,  91.55% probability)
  p >= 2^-40:   25,139,913,440 passwords   (  9.09% coverage,  94.25% probability)
  p >= 2^-41:   34,801,107,110 passwords   ( 12.58% coverage,  95.91% probability)
  p >= 2^-42:   52,374,739,350 passwords   ( 18.93% coverage,  97.38% probability)
  p >= 2^-43:   78,278,619,550 passwords   ( 28.30% coverage,  98.51% probability)
  p >= 2^-44:  111,967,613,850 passwords   ( 40.48% coverage,  99.25% probability)
  p >= 2^-45:  147,452,759,450 passwords   ( 53.31% coverage,  99.64% probability)
  p >= 2^-46:  186,012,691,450 passwords   ( 67.25% coverage,  99.86% probability)
  p >= 2^-47:  215,059,885,450 passwords   ( 77.75% coverage,  99.94% probability)
  p >= 2^-48:  242,726,285,450 passwords   ( 87.75% coverage,  99.98% probability)
  p >= 2^-49:  257,536,845,450 passwords   ( 93.10% coverage,  99.99% probability)
  p >= 2^-50:  268,815,845,450 passwords   ( 97.18% coverage, 100.00% probability)
  p >= 2^-51:  273,562,845,450 passwords   ( 98.90% coverage, 100.00% probability)
  p >= 2^-52:  275,712,845,450 passwords   ( 99.67% coverage, 100.00% probability)
  p >= 2^-53:  276,512,845,450 passwords   ( 99.96% coverage, 100.00% probability)
         all:  276,612,845,450 passwords   (100.00% coverage, 100.00% probability)


License
=======

pwbrute is Copyright (C) 2025 Steinar H. Gunderson.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

30 August, 2025 09:00AM

August 29, 2025

Ravi Dwivedi

Installing Debian With Btrfs and Encryption

Motivation

On the 8th of August 2025 (a day before the Debian Trixie release), I was upgrading my personal laptop from Debian Bookworm to Trixie. It was a major update. However, the update didn’t go smoothly, and I ran into some errors. From the Debian support IRC channel, I got to know that it would be best if I removed the texlive packages.

However, it was not so easy to just remove texlive with a simple apt remove command. I had to remove the texlive packages from /usr/bin. Then I ran into other errors. Hours after I started the upgrade, I realized I preferred having my system as it was before, as I had to travel to Noida the next day. Needless to say, I wanted to go to sleep rather than fix my broken system. If only I had a way to go back to my system as it was before I started upgrading, it would have saved me a lot of trouble. I ended up installing Trixie from scratch.

It turns out that there was a way to recover the state before the upgrade: using Timeshift to roll back the system to a past snapshot (in our example, the state before the upgrade process started). However, this needs the Btrfs filesystem with appropriate subvolumes, which the Debian installer does not provide in its guided partitioning menu.

I set it up a few weeks after the above-mentioned incident. Let me demonstrate how it works.

Check the screenshot above. It shows a list of snapshots made by Timeshift. Some of them were made by me manually. Others were made by Timeshift automatically as per the routine - I have set up hourly backups and weekly backups etc.

In the above-mentioned major update, I could have just taken a snapshot using Timeshift before performing the upgrade and rolled back to that snapshot when I found that I could not spend more time fixing my installation errors. Then I could just perform the upgrade later.
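For reference, such a snapshot can also be taken from the command line right before a risky operation. This is only a sketch (the comment text is arbitrary; the flags are the ones the timeshift CLI provides):

sudo timeshift --create --comments "before upgrade to trixie"

An earlier snapshot can later be restored with sudo timeshift --restore.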

Installation

In this tutorial, I will cover how I installed Debian with Btrfs and disk encryption, along with creating subvolumes @ for root and @home for /home so that I can use Timeshift to create snapshots. These snapshots are kept on the same disk where Debian is installed, and the use-case is to roll back to a working system in case I mess up something or to recover an accidentally deleted file.

I went through countless tutorials on the Internet, but I didn’t find a single tutorial covering both the disk encryption and the above-mentioned subvolumes (on Debian). Debian doesn’t create the desired subvolumes by default; therefore, the process requires some manual steps, which beginners may not be comfortable performing. Beginners can try distros such as Fedora and Linux Mint, as their installation includes Btrfs with the required subvolumes.

Furthermore, it is pertinent to note that I used Debian Trixie’s DVD iso on a real laptop (not a virtual machine) for my installation. Debian Trixie is the codename for the current stable version of Debian. Then I took screenshots in a virtual machine by repeating the process. Moreover, a couple of screenshots are from the installation I did on the real laptop.

Let’s start the tutorial by booting up the Debian installer.

The above screenshot shows the first screen we see on the installer. Since we want to choose Expert Install, we select Advanced Options in the screenshot above.

Let’s select the Expert Install option in the above screenshot. We choose it because we want to create subvolumes after the installer is done with the partitioning, and only then proceed to installing the base system. “Non-expert” install modes proceed directly to installing the system right after creating partitions, without pausing for us to create the subvolumes.

After selecting the Expert Install option, you will get the screen above. I will skip to partitioning from here and leave the intermediate steps such as choosing language, region, connecting to Wi-Fi, etc. For your reference, I did create the root user.

Let’s jump right to the partitioning step. Select the Partition disks option from the menu as shown above.

Choose Manual.

Select your disk where you would like to install Debian.

Select Yes when asked for creating a new partition.

I chose the msdos option as I am not using UEFI. If you are using UEFI, then you need to choose the gpt option. Also, your steps will (slightly) differ from mine if you are using UEFI. In that case, you can watch this video by the YouTube channel EF Linux in which he creates an EFI partition. As he doesn’t cover disk encryption, you can continue reading this post after following the steps corresponding to EFI.

Select the free space option as shown above.

Choose Create a new partition.

I chose the partition size to be 1 GB.

Choose Primary.

Choose Beginning.

Now, I got to this screen.

I changed mount point to /boot and turned on the bootable flag and then selected “Done setting up the partition.”

Now select free space.

Choose the Create a new partition option.

I made the partition size equal to the remaining space on my disk. I do not intend to create a swap partition, so I do not need more space.

Select Primary.

Select the Use as option to change its value.

Select “physical volume for encryption.”

Select Done setting up the partition.

Now select “Configure encrypted volumes.”

Select Yes.

Select Finish.

Selecting Yes will take a long time, as it erases the data on the disk. If you can spare the hours for this step (for example, if your SSD is around 1 TB), I recommend selecting “Yes.” Otherwise, you can select “No” and compromise a little on the quality of the encryption.

After this, you will be asked to enter a passphrase for disk encryption and confirm it. Please do so. I forgot to take the screenshot for that step.

Now select that encrypted volume as shown in the screenshot above.

Here we will change a couple of options which will be shown in the next screenshot.

In the Use as menu, select “btrfs journaling file system.”

Now, click on the mount point option.

Change it to “/ - the root file system.”

Select Done setting up the partition.

This is a preview of the partitioning after performing the above-mentioned steps.

If everything is okay, proceed with the Finish partitioning and write changes to disk option.

The installer is reminding us to create a swap partition. I proceeded without it as I planned to add swap after the installation.

If everything looks fine, choose “yes” for writing the changes to disks.

Now we are done with partitioning and we are shown the screen in the screenshot above. If we had not selected the Expert Install option, the installer would have proceeded to install the base system without asking us.

However, we want to create subvolumes before proceeding to install the base system. This is the reason we chose Expert Install.

Now press Ctrl + Alt + F2 to switch to a console.

You will see the screen as in the above screenshot. It says “Please press Enter to activate this console.” So, let’s press Enter.

After pressing Enter, we see the above screen.

The screenshot above shows the steps I performed in the console. I followed the already mentioned video by EF Linux for this part and adapted it to my situation (he doesn’t encrypt the disk in his tutorial).

First we run df -h to have a look at how our disk is partitioned. In my case, the output was:

# df -h
Filesystem              Size  Used  Avail   Use% Mounted on
tmpfs                   1.6G  344.0K  1.6G    0% /run
devtmpfs                7.7G       0  7.7G   0% /dev
/dev/sdb1               3.7G    3.7G    0   100% /cdrom
/dev/mapper/sda2_crypt  952.9G  5.8G  950.9G  0% /target
/dev/sda1               919.7M  260.0K  855.8M  0% /target/boot

df -h shows us that /dev/mapper/sda2_crypt and /dev/sda1 are mounted on /target and /target/boot respectively.

Let’s unmount them, starting with /target/boot since it is nested under /target. For that, we run:

# umount /target/boot
# umount /target

Next, let’s mount our root filesystem to /mnt.

# mount /dev/mapper/sda2_crypt /mnt

Let’s go into the /mnt directory.

# cd /mnt

Upon listing the contents of this directory, we get:

/mnt # ls
@rootfs

The Debian installer has created a subvolume @rootfs automatically. However, we need the subvolumes to be @ and @home. Therefore, let’s rename @rootfs to @.

/mnt # mv @rootfs @

Listing the contents of the directory again, we get:

/mnt # ls
@

We have only one subvolume right now. Therefore, let us go ahead and create another subvolume, @home.

/mnt # btrfs subvolume create @home
Create subvolume './@home'

If we perform ls now, we will see there are two subvolumes:

/mnt # ls
@ @home

Let us mount /dev/mapper/sda2_crypt to /target

/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@ /dev/mapper/sda2_crypt /target/

Now we need to create a directory for /home.

/mnt # mkdir /target/home/

Now we mount the /home directory with subvol=@home option.

/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@home /dev/mapper/sda2_crypt /target/home/

Now mount /dev/sda1 to /target/boot.

/mnt # mount /dev/sda1 /target/boot/

Now we need to add these options to the fstab file, which is located at /target/etc/fstab. Unfortunately, vim is not available in this console; the only editor available is nano.

nano /target/etc/fstab

Edit your fstab file to look similar to the one in the screenshot above. I am pasting the fstab file contents below for easy reference.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/sda2_crypt /        btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@ 0       0
/dev/mapper/sda2_crypt /home    btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@home 0       0
# /boot was on /dev/sda1 during installation
UUID=12842b16-d3b3-44b4-878a-beb1e6362fbc /boot           ext4    defaults        0       2
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0

Please double check the fstab file before saving it. In Nano, you can press Ctrl+O followed by pressing Enter to save the file. Then press Ctrl+X to quit Nano. Now, preview the fstab file by running

cat /target/etc/fstab

and verify that the entries are correct, otherwise you will be booted into an unusable, broken system after the installation is complete.

Next, press Ctrl + Alt + F1 to go back to the installer.

Proceed to “Install the base system.”
Screenshot of Debian installer installing the base system.

I chose the default option here - linux-image-amd64.

After this, the installer will ask you a few more questions. For desktop environment, I chose KDE Plasma. You can choose the desktop environment as per your liking. I will not cover the rest of the installation process and assume that you were able to install from here.

Post installation

Let’s jump to our freshly installed Debian system. Since I created a root user, I added the user ravi to the sudoers file (/etc/sudoers) so that ravi can run commands with sudo. Follow this if you would like to do the same; a rough sketch of one way to do it is shown below.
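This is only a sketch; run it as root, and replace ravi (the username from this tutorial) with yours:

# option 1: add the user to the sudo group (takes effect at the next login)
adduser ravi sudo

# option 2: edit /etc/sudoers safely with visudo and add a line such as:
#   ravi ALL=(ALL:ALL) ALL
visudo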

Now we set up zram as swap. First, install zram-tools.

sudo apt install zram-tools

Now edit the file /etc/default/zramswap and make sure the following lines are present and uncommented:

ALGO=lz4
PERCENT=50

Now, run

sudo systemctl restart zramswap

If you run lsblk now, you should see the below-mentioned entry in the output:

zram0          253:0    0   7.8G  0 disk  [SWAP]

This shows us that zram has been activated as swap.
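You can also confirm that the kernel is actually using it as swap:

swapon --show   # should list /dev/zram0
free -h         # the Swap line should show a non-zero total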

Now we install timeshift, which can be done by running

sudo apt install timeshift

After the installation is complete, run Timeshift and schedule snapshots as you please. We are done now. Hope the tutorial was helpful.

See you in the next post, and let me know if you have any suggestions or questions about this tutorial.

29 August, 2025 08:23PM

Raju Devidas

Fixing Auto-Rotate screen orientation on PostmarketOS devices running MATE DE

Fixing Auto-Rotate screen orientation on PostmarketOS devices running MATE DE

I have been using my Samsung Galaxy Tab A (2015) with PostmarketOS on and off since last year. It serves as a really good e-book reader with KOReader installed on it.

I have tried phosh and plasma-mobile on it; they work nicely but slow the device down heavily (2 GB RAM and an old processor), so I use the MATE desktop environment on it.

Lately I have started using this tablet along with my laptop as a second screen for work. And it has been working super nicely for that. The only issue is that I have to manually rotate the screen to landscape every time I reboot the device, as it resets the screen orientation to portrait after a reboot. So I went through the pmOS wiki, and a neat little hack documented there worked very well for me.

First we will test whether the auto-rotate sensor works and whether we can read values from it. So we install some basic required packages:

$ sudo apk add xrandr xinput inotify-tools iio-sensor-proxy

Enable the service for iio-sensor-proxy

sudo rc-update add iio-sensor-proxy

Reboot the device.
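Alternatively, the service can usually also be started right away with the standard OpenRC command (a reboot still ensures everything comes up cleanly):

sudo rc-service iio-sensor-proxy start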

Now, in the device terminal, start the sensor monitor with monitor-sensor:

user@samsung-gt58 ~> monitor-sensor
    Waiting for iio-sensor-proxy to appear
+++ iio-sensor-proxy appeared
=== Has accelerometer (orientation: normal, tilt: vertical)
=== Has ambient light sensor (value: 5.000000, unit: lux)
=== No proximity sensor
=== No compass
    Light changed: 14.000000 (lux)
    Accelerometer orientation changed: left-up
    Tilt changed: tilted-down
    Light changed: 12.000000 (lux)
    Tilt changed: vertical
    Light changed: 13.000000 (lux)
    Light changed: 11.000000 (lux)
    Light changed: 13.000000 (lux)
    Accelerometer orientation changed: normal
    Light changed: 5.000000 (lux)
    Light changed: 6.000000 (lux)
    Light changed: 5.000000 (lux)
    Accelerometer orientation changed: right-up
    Light changed: 3.000000 (lux)
    Light changed: 4.000000 (lux)
    Light changed: 5.000000 (lux)
    Light changed: 12.000000 (lux)
    Tilt changed: tilted-down
    Light changed: 19.000000 (lux)
    Accelerometer orientation changed: bottom-up
    Tilt changed: vertical
    Light changed: 1.000000 (lux)
    Light changed: 2.000000 (lux)
    Light changed: 4.000000 (lux)
    Accelerometer orientation changed: right-up
    Tilt changed: tilted-down
    Light changed: 11.000000 (lux)
    Accelerometer orientation changed: normal
    Tilt changed: vertical
    Tilt changed: tilted-down
    Light changed: 18.000000 (lux)
    Light changed: 21.000000 (lux)
    Light changed: 22.000000 (lux)
    Light changed: 19.000000 (lux)
    Accelerometer orientation changed: left-up
    Light changed: 17.000000 (lux)
    Tilt changed: vertical
    Light changed: 14.000000 (lux)
    Tilt changed: tilted-down
    Light changed: 16.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)

As you can see, we can read the orientation values from the sensor as I rotate the tablet into different orientations.

Now we just need to use a script which changes the screen orientation using xrandr according to the sensor value.

#!/bin/sh

killall monitor-sensor
monitor-sensor > /dev/shm/sensor.log 2>&1 &

while inotifywait -e modify /dev/shm/sensor.log; do

  ORIENTATION=$(tail /dev/shm/sensor.log | grep 'orientation' | tail -1 | grep -oE '[^ ]+$')

  case "$ORIENTATION" in

    normal)
      xrandr -o normal
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 1 0 0 0 1 0 0 0 1
      ;;
    left-up)
      xrandr -o left
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 0 -1 1 1 0 0 0 0 1
      ;;
    bottom-up)
      xrandr -o inverted
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" -1 0 1 0 -1 1 0 0 1
      ;;
    right-up)
      xrandr -o right
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 0 1 0 -1 0 1 0 0 1
      ;;

  esac
done

auto-rotate-screen.sh

You need to replace the name of your touch input device in the script. You can get the name by running xinput --list; make sure to run this in the device terminal.

user@samsung-gt58 ~> xinput --list
* Virtual core pointer                    	id=2	[master pointer  (3)]
*   * Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]
*   * Zinitix Capacitive TouchScreen          	id=10	[slave  pointer  (2)]
*   * Toad One Plus                           	id=12	[slave  pointer  (2)]
* Virtual core keyboard                   	id=3	[master keyboard (2)]
    * Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]
    * GPIO Buttons                            	id=6	[slave  keyboard (3)]
    * pm8941_pwrkey                           	id=7	[slave  keyboard (3)]
    * pm8941_resin                            	id=8	[slave  keyboard (3)]
    * Zinitix Capacitive TouchScreen          	id=11	[slave  keyboard (3)]
    * samsung-a2015 Headset Jack              	id=9	[slave  keyboard (3)]

The script above uses a Goodix capacitive touchscreen as an example; on this device the touchscreen is a Zinitix, and it will likely be different for yours, so adjust the name accordingly.

Once your script is ready with the correct touchscreen name, save it and make it executable: chmod +x auto-rotate-screen.sh

Then test your script in your terminal with ./auto-rotate-screen.sh and stop it using Ctrl + C.

Now we need to add this script to auto-start. On the MATE DE you can go to System > Control Center > Startup Applications, then click on the Custom Add button, browse to the script location, give it a name, and then click the Add button. Alternatively, you can do the same from the terminal, as sketched below.
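This sketch uses the standard XDG autostart mechanism; the Exec= path is only an example, so adjust it to wherever you saved the script:

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/auto-rotate-screen.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Auto-rotate screen
Exec=/home/user/auto-rotate-screen.sh
EOF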

Now reboot the tablet/device, login and see the auto rotation working.



  1. Auto-Rotation wiki article on PostmarketOS Wiki https://wiki.postmarketos.org/wiki/Auto-rotation

29 August, 2025 07:40PM by Raju Vindane

Noah Meyerhans

Determining Network Online Status of Dualstack Cloud VMs

When a Debian cloud VM boots, it typically runs cloud-init at various points in the boot process. Each invocation can perform certain operations based on the host’s static configuration passed by the user, typically either through a well known link-local network service or an attached iso9660 drive image. Some of the cloud-init steps execute before the network comes up, and others at a couple of different points after the network is up.

I recently encountered an unexpected issue when configuring a dualstack (uses both IPv6 and legacy IPv4 networking) VM to use a custom apt server accessible only via IPv6. VM provisioning failed because it was unable to access the server in question, yet when I logged in to investigate, it was able to access the server without any problem. The boot had apparently gone smoothly right up until cloud-init’s Package Update Upgrade Install module called apt-get update, which failed and broke subsequent provisioning steps. The errors reported by apt-get indicated that there was no route to the service in question, which more accurately probably meant that there was not yet a route to the service. But there was shortly after, when I investigated.

This was surprising because the apt-get invocations occur in a cloud-init sequence that’s explicitly ordered after the network is configured according to systemd-networkd-wait-online. Investigation eventually led to similar issues encountered in other environments reported in Debian bug #1111791, “systemd: network-online.target reached before IPv6 address is ready”. The issue described in that bug is identical to mine, but the bug is tagged wontfix. The behavior is considered correct.

Why the default behavior is the correct one

While it’s a bit counterintuitive, the systemd-networkd behavior is correct, and it’s also not something we’d want to override in the cloud images. Without explicit configuration, systemd can’t accurately infer the intended network configuration of a given system. If a system is IPv6-only, systemd-networkd-wait-online will introduce unexpected delays in the boot process if it waits for IPv4, and vice-versa. If it assumes dualstack, things are even worse because it would block for a long time (approximately two minutes) in any single stack network before failing, leaving the host in degraded state. So the most reasonable default behavior is to block until any protocol is configured.

For these same reasons, we can’t change the systemd-networkd-wait-online configuration in our cloud images. All of the cloud environments we support offer both single stack and dual stack networking, so we preserve systemd’s default behavior.

What’s causing problems here is that IPv6 takes significantly longer to configure due to its more complex router solicitation + router advertisement + DHCPv6 setup process. So in this particular case, where I’ve got a dualstack VM that needs to access a v6-only apt server during the provisioning process, I need to find some mechanism to override systemd’s default behavior and wait for IPv6 connectivity specifically.

What won’t work

Cloud-init offers the ability to write out arbitrary files during provisioning. So writing a drop-in for systemd-networkd-wait-online.service is trivial. Unfortunately, this doesn’t give us everything we actually need. We still need to invoke systemctl daemon-reload to get systemd to actually apply the changes after we’ve written them, and of course we need to do that before the service actually runs. Cloud-init provides a bootcmd module that lets us run shell commands “very early in the boot process”, but it runs too early: it runs before we’ve written out our configuration files. Similarly, it provides a runcmd module, but scripts there run towards the end of the boot process, far too late to be useful.

Instead of using the bootcmd facility simply to reload systemd’s config, it seemed possible that we could use it both to write the config and to trigger the reload, similar to the following:

bootcmd:
  - mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d
  - echo "[Service]" > /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - echo "ExecStart=" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - echo "ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - systemctl daemon-reload

But even that runs too late, as we can see in the logs that systemd-networkd-wait-online.service has completed before bootcmd is executed:

root@sid-tmp2:~# journalctl --no-pager -l -u systemd-networkd-wait-online.service
Aug 29 17:02:12 sid-tmp2 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 29 17:02:13 sid-tmp2 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
root@sid-tmp2:~# grep -F 'config-bootcmd ran' /var/log/cloud-init.log
2025-08-29 17:02:14,766 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully and took 0.467 seconds

At this point, it’s looking like there are few options left!

What eventually worked

I ended up identifying two solutions to the issue, both of which involve getting some other component of the provisioning process to run systemd-networkd-wait-online.

Solution 1

The first involves getting apt-get itself to wait for IPv6 configuration. The apt.conf configuration interface allows the definition of an APT::Update::Pre-Invoke hook that’s executed just before apt’s update operation. By writing the following to a file in /etc/apt/apt.conf.d/, we’re able to ensure that we have IPv6 connectivity before apt-get tries accessing the network. This cloud-config snippet accomplishes that:

write_files:
  - path: /etc/apt/apt.conf.d/99-wait-for-ipv6
    content: |
      APT::Update::Pre-Invoke { "/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6"; }

This is safe to leave in place after provisioning, because the delay will be negligible once IPv6 connectivity is established. It’s only during address configuration that it’ll block for a noticeable amount of time, but that’s what we want.

This solution isn’t entirely correct, though, because it’s only apt-get that’s actually affected by it. Other services that start after the system is ostensibly “online” might only see IPv4 connectivity when they start. This seems acceptable at the moment, though.

Solution 2

The second solution is to simply invoke systemd-networkd-wait-online directly from a cloud-init bootcmd. Similar to the first solution, it’s not exactly correct because the host has already reached network-online.target, but it does block enough of cloud-init that package installation happens only after it completes. The cloud-config snippet for this is

bootcmd:
- [/usr/lib/systemd/systemd-networkd-wait-online, --operational-state=routable, --any, --ipv6]

In either case, we still want to write out a snippet to configure systemd-networkd-wait-online to wait for IPv6 connectivity for future reboots. Even though cloud-init won’t necessarily run in those cases, and many cloud VMs never reboot at all, it does complete the solution. Additionally, it solves the problem for any derivative images that may be created based on the running VM’s state. (At least if we can be certain that instances of those derivative images will never run in an IPv4-only network!)

write_files:
  - path: /run/systemd/system/systemd-networkd-wait-online.service.d/99-ipv6-wait.conf
    content: |
      [Service]
      ExecStart=
      ExecStart=/lib/systemd/systemd-networkd-wait-online --any --operational-state=routable --ipv6
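
Once the instance is up, it is easy to confirm that the drop-in was actually picked up:

systemctl cat systemd-networkd-wait-online.service

The output should show the 99-ipv6-wait.conf drop-in appended after the unit file itself.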

How to properly solve it

One possible improvement would be for cloud-init to support a configuration key allowing the admin to specify the required protocols. Based on the presence of this key, cloud-init could reconfigure systemd-networkd-wait-online.service accordingly. Alternatively it could set the appropriate RequiredFamilyForOnline= value in the generated .network file. cloud-init supports multiple network configuration backends, so each of those would need to be updated. If using the systemd-networkd configuration renderer, this should be straightforward, but Debian uses the netplan renderer, so that tool might also need to be taught to pass such a configuration along to systemd-networkd.

29 August, 2025 04:38PM by Noah Meyerhans (frodo+blog@morgul.net)

August 28, 2025

hackergotchi for Samuel Henrique

Samuel Henrique

Debian 13: My list of exciting new features

A bunch of screenshots overlaid on top of each other showing different tools: lazygit, gnome settings, gnome system monitor, powerline-go, and the wcurl logo, the text at the top says 'Debian 13: My list of exciting new features', and there's a Debian logo in the middle of image

Beyond Debian: Useful for other distros too

Every two years Debian releases a new major version of its Stable series, meaning the differences between consecutive Debian Stable releases represent two years of new developments, both in Debian as an organization and in its native packages, but also in all the other packages that are shipped by other distributions as well and are now entering this new Stable release.

If you're not paying close attention to everything that's going on all the time in the Linux world, you miss a lot of the nice new features and tools. It's common for people to only realize there's a cool new trick available years after it was first introduced.

Given these considerations, the tips that I'm describing will eventually be available in whatever other distribution you use, be it because it's a Debian derivative or because it just got the same feature from the upstream project.

I'm not going to list "passive" features (as good as they can be), the focus here is on new features that might change how you configure and use your machine, with a mix between productivity and performance.

Debian 13 - Trixie

I have been a Debian Testing user for longer than 10 years now (and I recommend it for non-server users), so I'm not usually keeping track of all the cool features arriving in the new Stable releases because I'm continuously receiving them through the Debian Testing rolling release.

Nonetheless, as a Debian Developer I'm in a good position to point out the ones I can remember. I would also like other Debian Developers to do the same as I'm sure I would learn something new.

The Debian 13 release notes contain a "What's new" section, which lists the first two items here and a few other things; in other words, take my list as an addition to the release notes.

Debian 13 was released on 2025-08-09, and these are nice things you shouldn't miss in the new release, with a bonus one not tied to the Debian 13 release.

1) wcurl

wcurl logo

Have you ever had to download a file from your terminal using curl and didn't remember the parameters needed? I did.

Nowadays you can use wcurl; "a command line tool which lets you download URLs without having to remember any parameters."

Simply call wcurl with one or more URLs as parameters and it will download all of them in parallel, performing retries, choosing the correct output file name, following redirects, and more.

Try it out:

wcurl example.com

wcurl comes installed as part of the curl package on Debian 13 and in any other distribution you can imagine, starting with curl 8.14.0.

I've written more about wcurl in its release announcement and I've done a lightning talk presentation in DebConf24, which is linked in the release announcement.

2) HTTP/3 support in curl

Debian has become the first stable Linux distribution to ship curl with support for HTTP/3. I've written about this in July 2024, when we first enabled it. Note that we first switched the curl CLI to GnuTLS, but then ended up releasing the curl CLI linked with OpenSSL (as support arrived later).

Debian was the first Linux distro to enable it in the default build of the curl package, but Gentoo enabled it a few weeks earlier in their non-default flavor of the package, kudos to them!

HTTP/3 is not used by default by the curl CLI, you have to enable it with --http3 or --http3-only.

Try it out:

curl --http3 https://www.example.org
curl --http3-only https://www.example.org
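
If you want to confirm which protocol version was actually negotiated, curl's --write-out variable %{http_version} reports it (it prints 3 when HTTP/3 was used; note that plain --http3 can fall back to an older version):

curl --http3 -s -o /dev/null -w '%{http_version}\n' https://www.example.org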

3) systemd soft-reboot

Starting with systemd v254, there's a new soft-reboot option: a userspace-only reboot, much faster than a full reboot if you don't need to reboot the kernel.

You can read the announcement from the systemd v254 GitHub release.

Try it out:

# This will reboot your machine!
systemctl soft-reboot

4) apt --update

Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I!

The new --update option lets you do both things in a single command:

sudo apt --update upgrade
sudo apt --update install $PACKAGE

I love this, but it's still not yet where it should be, fingers crossed for a simple apt upgrade to behave like other package managers by updating its cache as part of the task, maybe in Debian 14?

Try it out:

sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade

This is especially handy for container usage, where you have to update the apt cache before installing anything, for example:

podman run debian:stable /bin/bash -c 'apt install --update -y curl'

5) powerline-go

powerline-go is a powerline-style prompt written in Golang, so it's much more performant than its Python alternative powerline.

powerline-style prompts are quite useful to show things like the current status of the git repo in your working directory, exit code of the previous command, presence of jobs in the background, whether or not you're in an ssh session, and more.

A screenshot of a terminal with powerline-go enabled, showing how the PS1 changes inside a git repository and when the last command fails

Try it out:

sudo apt install powerline-go

Then add this to your .bashrc:

function _update_ps1() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs $(jobs -p | wc -l))"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi

Or this to .zshrc:

function powerline_precmd() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs ${${(%):%j}:-0})"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}

# Register the hook so zsh actually calls powerline_precmd before each prompt
# (mirroring the guard used for bash above).
if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    precmd_functions+=(powerline_precmd)
fi

If you'd like to have your prompt start in a newline, like I have in the screenshot above, you just need to set -newline in the powerline-go invocation in your .bashrc/.zshrc.

6) Gnome System Monitor Extension

Tips number 6 and 7 are for Gnome users.

Gnome is now shipping a system monitor extension which lets you get a glance at the current load of your machine from the top bar.

Screenshot of the top bar of Gnome with the system monitor extension enabled, showing the load of: CPU, memory, network and disk

I've found this quite useful for machines where I'm required to install third-party monitoring software that tends to randomly consume more resources than it should. If I feel like my machine is struggling, I can quickly glance at its load to verify if it's getting overloaded by some process.

The extension is not as complete as system-monitor-next, not showing temperatures or histograms, but at least it's officially part of Gnome, easy to install and supported by them.

Try it out:

sudo apt install gnome-system-monitor gnome-shell-extension-manager

And then enable the extension from the "Extension Manager" application.

7) Gnome setting for battery charging profile

After having to learn more about batteries in order to get into FPV drones, I've come to have a bigger appreciation for solutions that minimize the inevitable loss of capacity that accrues over time.

There's now a "Battery Charging" setting (under the "Power") section which lets you choose between two different profiles: "Maximize Charge" and "Preserve Battery Health".

A screenshot of the Gnome settings for Power showing the options for Battery Charging

On supported laptops, this setting is an easy way to set thresholds for when charging should start and stop, just like you could do it with the tlp package, but now from the Gnome settings.

To increase the longevity of my laptop battery, I always keep it at "Preserve Battery Health" unless I'm traveling.

What I would like to see next is support for choosing different "Power Modes" based on whether the laptop is plugged-in, and based on the battery charge percentage.

There's a GNOME issue tracking this feature, but there's some pushback on whether this is the right thing to expose to users.

In the meantime, there are some workarounds mentioned in that issue which people who really want this feature can follow.

If you would like to learn more about batteries; Battery University is a great starting point, besides getting into FPV drones and being forced to handle batteries without a Battery Management System (BMS).

And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's YouTube channel is a great resource: @JoshuaBardwell.

8) Lazygit

Emacs users are already familiar with the legendary magit; a terminal-based UI for git.

Lazygit is an alternative for non-emacs users, you can integrate it with neovim or just use it directly.

I'm still playing with lazygit and haven't integrated it into my workflows, but so far it has been a pleasant experience.

Screenshot of lazygit from the Debian curl repository, showing a selected commit and its diff, besides the other things from the lazygit UI

You should check out the demos from the lazygit GitHub page.

Try it out:

sudo apt install lazygit

And then call lazygit from within a git repository.

9) neovim

neovim has been shipped in Debian since 2016, but upstream has been doing a lot of work to improve the experience out-of-the-box in the last couple of years.

If you're a neovim poweruser, you're likely not installing it from the official repositories, but for those that are, Debian 13 comes with version 0.10.4, which brings the following improvements compared to the version in Debian 12:

  • Treesitter support for C, Lua, Markdown, with the possibility of adding any other languages as needed;

  • Better spellchecking due to treesitter integration (spellsitter);

  • Mouse support enabled by default;

  • Commenting support out-of-the-box;

    Check :h commenting for details, but the tl;dr is that you can use gcc to comment the current line and gc to comment the current selection.

  • OSC52 support.

    Especially handy for those using neovim over an ssh connection, this protocol lets you copy something from within the neovim process into the clipboard of the machine you're using to connect through ssh. In other words, you can copy from neovim running in a host over ssh and paste it in the "outside" machine.

10) [Bonus] Running old Debian releases

The bonus tip is not specific to the Debian 13 release, but something I've recently learned in the #debian-devel IRC channel.

Did you know there are usable container images for all past Debian releases? I'm not talking "past" as in "some of the older releases", I'm talking past as in "literally every Debian release, including the very first one".

Tianon Gravi "tianon" is the Debian Developer responsible for making this happen, kudos to him!

There's a small gotcha that the releases Buzz (1.1) and Rex (1.2) require a 32-bit host, otherwise you will get the error Out of virtual memory!, but starting with Bo (1.3) all should work in amd64/arm64.

Try it out:

sudo apt install podman

podman run -it docker.io/debian/eol:bo

Don't be surprised when noticing that apt/apt-get is not available inside the container, that's because apt first appeared in Debian Slink (2.1).

28 August, 2025 05:30PM by Unknown

Debian 13: My list of exciting new features

A bunch of screenshots overlaid on top of each other showing different tools: lazygit, gnome settings, gnome system monitor, powerline-go, and the wcurl logo, the text at the top says 'Debian 13: My list of exciting new features', and there's a Debian logo in the middle of image

Beyond Debian: Useful for other distros too

Every two years Debian releases a new major version of its Stable series. The differences between consecutive Stable releases therefore represent two years of new development, both in Debian as an organization and its native packages, and in all the other packages shipped by other distributions that also end up in the new Stable release.

If you're not paying close attention to everything that's going on all the time in the Linux world, you miss a lot of the nice new features and tools. It's common for people to only realize there's a cool new trick available years after it was first introduced.

Given these considerations, the tips that I'm describing will eventually be available in whatever other distribution you use, be it because it's a Debian derivative or because it just got the same feature from the upstream project.

I'm not going to list "passive" features (as good as they can be), the focus here is on new features that might change how you configure and use your machine, with a mix between productivity and performance.

Debian 13 - Trixie

I have been a Debian Testing user for more than 10 years now (and I recommend it for non-server users), so I don't usually keep track of all the cool features arriving in the new Stable releases, because I receive them continuously through the Debian Testing rolling release.

Nonetheless, as a Debian Developer I'm in a good position to point out the ones I can remember. I would also like other Debian Developers to do the same as I'm sure I would learn something new.

The Debian 13 release notes contain a "What's new" section, which lists the first two items here and a few other things; in other words, take my list as an addition to the release notes.

Debian 13 was released on 2025-08-09, and these are nice things you shouldn't miss in the new release, with a bonus one not tied to the Debian 13 release.

1) wcurl

wcurl logo

Have you ever had to download a file from your terminal using curl and didn't remember the parameters needed? I did.

Nowadays you can use wcurl; "a command line tool which lets you download URLs without having to remember any parameters."

Simply call wcurl with one or more URLs as parameters and it will download all of them in parallel, performing retries, choosing the correct output file name, following redirects, and more.

Try it out:

wcurl example.com
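
Since wcurl accepts any number of URLs and downloads them in parallel, you can also pass several at once (the URLs below are just placeholders):

wcurl https://example.com/a.iso https://example.com/b.iso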

wcurl comes installed as part of the curl package on Debian 13 and in any other distribution you can imagine, starting with curl 8.14.0.

I've written more about wcurl in its release announcement and I've done a lightning talk presentation in DebConf24, which is linked in the release announcement.

2) HTTP/3 support in curl

Debian has become the first stable Linux distribution to ship curl with support for HTTP/3. I've written about this in July 2024, when we first enabled it. Note that we first switched the curl CLI to GnuTLS, but then ended up releasing the curl CLI linked with OpenSSL (as support arrived later).

Debian was the first stable Linux distro to enable it. Among rolling-release-based distros, Gentoo enabled it first in their non-default flavor of the package, and Arch Linux did it three months before we pushed it to Debian Unstable/Testing/Stable-backports, kudos to them!

HTTP/3 is not used by default by the curl CLI; you have to enable it with --http3 or --http3-only.

Try it out:

curl --http3 https://www.example.org
curl --http3-only https://www.example.org
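
If you want to confirm that HTTP/3 was actually negotiated (the server has to support it, of course), curl's --write-out variable http_version comes in handy; a quick check could look like this:

# prints "3" when the transfer used HTTP/3
curl --http3 -sS -o /dev/null -w '%{http_version}\n' https://www.example.org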

3) systemd soft-reboot

Starting with systemd v254, there's a new soft-reboot option: a userspace-only reboot, much faster than a full reboot when you don't need to reboot the kernel.

You can read the announcement in the systemd v254 GitHub release.

Try it out:

# This will reboot your machine!
systemctl soft-reboot

4) apt --update

Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I!

The new --update option lets you do both things in a single command:

sudo apt --update upgrade
sudo apt --update install $PACKAGE

I love this, but it's not yet where it should be; fingers crossed for a plain apt upgrade to behave like other package managers and update its cache as part of the task, maybe in Debian 14?

Try it out:

sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade

This is especially handy for container usage, where you have to update the apt cache before installing anything, for example:

podman run debian:stable /bin/bash -c 'apt install --update -y curl'

5) powerline-go

powerline-go is a powerline-style prompt written in Golang, so it's much more performant than its Python alternative powerline.

powerline-style prompts are quite useful to show things like the current status of the git repo in your working directory, exit code of the previous command, presence of jobs in the background, whether or not you're in an ssh session, and more.

A screenshot of a terminal with powerline-go enabled, showing how the PS1 changes inside a git repository and when the last command fails

Try it out:

sudo apt install powerline-go

Then add this to your .bashrc:

function _update_ps1() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs $(jobs -p | wc -l))"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi

Or this to .zshrc:

function powerline_precmd() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs ${${(%):%j}:-0})"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}
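
Note that, unlike the bash setup above, this zsh snippet defines powerline_precmd but never registers it as a precmd hook. The upstream powerline-go README wires it up roughly as follows (a sketch based on that README; adjust to taste):

function install_powerline_precmd() {
    for s in "${precmd_functions[@]}"; do
        if [ "$s" = "powerline_precmd" ]; then
            return
        fi
    done
    precmd_functions+=(powerline_precmd)
}

if [ "$TERM" != "linux" ]; then
    install_powerline_precmd
fi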

If you'd like your prompt to start on a new line, like in the screenshot above, just add -newline to the powerline-go invocation in your .bashrc/.zshrc.

6) Gnome System Monitor Extension

Tips 6 and 7 are for Gnome users.

Gnome now ships a system monitor extension which lets you see the current load of your machine at a glance from the top bar.

Screenshot of the top bar of Gnome with the system monitor extension enabled, showing the load of: CPU, memory, network and disk

I've found this quite useful for machines where I'm required to install third-party monitoring software that tends to randomly consume more resources than it should. If I feel like my machine is struggling, I can quickly glance at its load to verify if it's getting overloaded by some process.

The extension is not as complete as system-monitor-next, as it doesn't show temperatures or histograms, but at least it's officially part of Gnome, easy to install, and supported by them.

Try it out:

sudo apt install gnome-system-monitor gnome-shell-extension-manager

And then enable the extension from the "Extension Manager" application.

7) Gnome setting for battery charging profile

After having to learn more about batteries in order to get into FPV drones, I've come to have a bigger appreciation for solutions that minimize the inevitable loss of capacity that accrues over time.

There's now a "Battery Charging" setting (under the "Power") section which lets you choose between two different profiles: "Maximize Charge" and "Preserve Battery Health".

A screenshot of the Gnome settings for Power showing the options for Battery Charging

On supported laptops, this setting is an easy way to set thresholds for when charging should start and stop, just as you could with the tlp package, but now from the Gnome settings.

To increase the longevity of my laptop battery, I always keep it at "Preserve Battery Health" unless I'm traveling.

What I would like to see next is support for choosing different "Power Modes" based on whether the laptop is plugged in, and based on the battery charge percentage.

There's a GNOME issue tracking this feature, but there's some pushback on whether this is the right thing to expose to users.

In the meantime, there are some workarounds mentioned in that issue which people who really want this feature can follow.

If you would like to learn more about batteries, Battery University is a great starting point; the other option is getting into FPV drones and being forced to handle batteries without a Battery Management System (BMS).

And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's YouTube channel is a great resource: @JoshuaBardwell.

8) Lazygit

Emacs users are already familiar with the legendary magit, a terminal-based UI for git.

Lazygit is an alternative for non-emacs users; you can integrate it with neovim or just use it directly.

I'm still playing with lazygit and haven't integrated it into my workflows, but so far it has been a pleasant experience.

Screenshot of lazygit from the Debian curl repository, showing a selected commit and its diff, besides the other things from the lazygit UI

You should check out the demos from the lazygit GitHub page.

Try it out:

sudo apt install lazygit

And then call lazygit from within a git repository.

9) neovim

neovim has been shipped in Debian since 2016, but upstream has been doing a lot of work to improve the experience out-of-the-box in the last couple of years.

If you're a neovim poweruser, you're likely not installing it from the official repositories, but for those that are, Debian 13 comes with version 0.10.4, which brings the following improvements compared to the version in Debian 12:

  • Treesitter support for C, Lua, Markdown, with the possibility of adding any other languages as needed;

  • Better spellchecking due to treesitter integration (spellsitter);

  • Mouse support enabled by default;

  • Commenting support out-of-the-box;

    Check :h commenting for details, but the tl;dr is that you can use gcc to comment the current line and gc to comment the current selection.

  • OSC52 support.

    Especially handy for those using neovim over an ssh connection: this protocol lets you copy something from within the neovim process into the clipboard of the machine you're connecting from. In other words, you can copy from neovim running on a remote host over ssh and paste it on the "outside" machine.

10) [Bonus] Running old Debian releases

The bonus tip is not specific to the Debian 13 release, but something I've recently learned in the #debian-devel IRC channel.

Did you know there are usable container images for all past Debian releases? I'm not talking "past" as in "some of the older releases", I'm talking past as in "literally every Debian release, including the very first one".

Tianon Gravi "tianon" is the Debian Developer responsible for making this happen, kudos to him!

There's a small gotcha: the releases Buzz (1.1) and Rex (1.2) require a 32-bit host, otherwise you will get the error Out of virtual memory!; starting with Bo (1.3), everything should work on amd64/arm64.

Try it out:

sudo apt install podman

podman run -it docker.io/debian/eol:bo

Don't be surprised to find that apt/apt-get is not available inside the container; that's because apt first appeared in Debian Slink (2.1).
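
dpkg itself predates apt by quite a bit, so it should still be available even in these ancient images; an untested sketch for poking at the package database:

podman run -it docker.io/debian/eol:bo dpkg -l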

Changes since publication

2025-08-30

  • Mention that Arch also enabled HTTP/3.

28 August, 2025 05:30PM by Unknown

Valhalla's Things

1840s Underwear

Posted on August 28, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a knee-length shift with very short pleated sleeves and drawers that are a bit longer than needed to be ankle-length. The shift is too wide at the top, had to have a pleat taken in the center front, but the sleeves are still falling down. She is also wearing a black long sleeved t-shirt and leggings under said underwear, for decency.

A bit more than a year ago, I had been thinking about making myself a cartridge pleated skirt. For a number of reasons, one of which is the historybounding potential, I’ve been thinking pre-crinoline, so somewhere around the 1840s, and that’s a completely new era for me, which means: new underwear.

Also, the 1840s are pre-sewing machine, and I was already in a position where I had more chances to handsew than to machine sew, so I decided to embrace the slowness and sew 100% by hand, not even using the machine for straight seams.

A woman turning fast enough that her petticoat extends a considerable distance from the body. The petticoat is white with a pattern of cording from the hem to just below hip level, with a decreasing number of rows of cording going up.

If I remember correctly, I started with the corded petticoat, looking around the internet for instructions, and then designing my own based on the practicality of using modern wide fabric from my stash (and specifically some DITTE from costumers’ favourite source of dirt cheap cotton, IKEA).

Around the same time I had also acquired a sashiko kit, and I used the Japanese technique of sewing running stitches by pushing the needle with a thimble that covers the base of the middle finger, and I can confirm that for this kind of thing it’s great!

I’ve since worn the petticoat a few times for casual / historyBounding / folkwearBounding reasons, during the summer, and I can confirm it’s comfortable to use; I guess that during the winter it could be nice to add a flannel layer below it.

The technical drawing and pattern for drawers from the book: each leg is cut out of a rectangle of fabric folded along the length, the leg is tapered equally, while the front is tapered more than the back, and comes to a point below the top of the original rectangle.

Then I proceeded with the base layers: I had been browsing through The workwoman's guide and that provided plenty of examples, and I selected the basic ankle-length drawers from page 53 and the alternative shift on page 47.

As for fabric, I had (and still have) a significant lack of underwear linen in my stash, but I had plenty of cotton voile that I had not used in a while: not very historically accurate for plain underwear, but quite suitable for a wearable mockup.

Working with an 1830s source had an interesting aspect: other than the usual, mildly annoying, imperial units, it also made heavy use of a few obsolete units, especially nails, which qalc, my usual calculator and converter, doesn’t support. Not a big deal, because GNU units came to the rescue: that one knows a lot of obscure and niche units, and it’s quite easy to add those that are missing1
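
For reference, a quick sketch of that workflow, assuming the standard units database includes a definition for nail (the example definitions below are purely illustrative values):

# convert between nails and centimetres with GNU units
units '3 nail' 'cm'

# personal definitions live in ~/.units, one per line: name, whitespace, definition
cat >> ~/.units <<'EOF'
banana       18 cm   # illustrative value
beardsecond   5 nm   # illustrative value
EOF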

Working on this project also made me freshly aware of something I had already noticed: converting instructions for machine sewing garments into instructions for hand sewing them is usually straightforward, but the reverse is not always true.

Starting from machine stitching, you can usually convert straight stitches into backstitches (or running backstitches), zigzag and overlocking into overcasting and get good results. In some cases you may want to use specialist hand stitches that don’t really have a machine equivalent, such as buttonhole stitches instead of simply overcasting the buttonhole, but that’s it.

Starting from hand stitching, instead, there are a number of techniques that could be converted to machine stitching, but involve a lot of visible topstitching that wasn’t there in the original instructions, or at times are almost impossible to do by machine, if they involve whipstitching together finished panels on seams that are subject to strong tension.

Anyway, halfway through working with the petticoat I cut both the petticoat and the drawers at the same time, for efficiency in fabric use, and then started sewing the drawers.

the top third or so of the drawers, showing a deep waistband that is closed with just one button at the top, and the front opening with finished edges that continue through the whole crotch, with just the overlap of fabric to provide coverage.

The book only provided measurements for one size (moderate), and my fabric was a bit too narrow to make them that size (not that I have any idea what hip circumference a person of moderate size was supposed to have), so the result is just wide enough to be comfortably worn; I think that when I make another pair I’ll try to make them a bit wider. On the other hand they are a bit too long, but I think I’ll fix that by adding a tuck or two. Not a big deal, anyway.

The same woman as in the opening image from the back, the shift droops significantly in the center back, and the shoulder straps have fallen down on the top of the arms.

The shift gave me a bit more issues: I used the recommended gusset size, and ended up with a shift that was way too wide at the top, so I had to take a box pleat in the center front and back, which changed the look and wear of the garment. I have adjusted the instructions to make gussets wider, and in the future I’ll make another shift following those.

Even with the pleat, the narrow shoulder straps are set quite far to the sides, and they tend to droop, and I suspect that this is to be expected from the way this garment is made. The fact that there are buttonholes on the shoulder straps to attach to the corset straps and prevent the issue is probably a hint that this behaviour was to be expected.

The technical drawing of the shift from the book, showing a the top of the body, two trapezoidal shoulder straps, the pleated sleeves and a ruffle on the front edge.

I’ve also updated the instructions so that the shoulder straps are a bit wider, to look more like the ones in the drawing from the book.

Making a corset suitable for the time period is something I will probably do, though not in the immediate future; in the meantime, even just wearing the shift under a later midbust corset with no shoulder straps helps.

I’m also not sure what the point of the bosom gores is, as they don’t really give more room to the bust where it’s needed, but to the high bust where it’s counterproductive. I also couldn’t find images of original examples made from this pattern to see if they were actually used, so in my next make I may just skip them.

Sleeve detail, showing box pleats that are about 2 cm wide and a few mm distance from each other all along the circumference, neatly sewn into the shoulder strap on one side and the band at the other side.

On the other hand, I’m really happy with how cute the short sleeves look, and if2 I ever make the other cut of shift from the same book, with the front flaps, I’ll definitely use these pleated sleeves rather than the straight ones that were also used at the time.

As usual, all of the patterns have been published on my website under a Free license:


  1. My ~/.units file currently contains definitions for beardseconds, bananas and the more conventional Nm and NeL (linear mass density of fibres).↩︎

  2. yeah, right. when.↩︎

28 August, 2025 12:00AM

August 27, 2025

Russell Coker

ZRAM and VMs

I’ve just started using zram for swap on VMs. The use of compression for swap in Linux apparently isn’t new; it’s been in the Linux kernel since version 3.2 (2012). But until recent years I hadn’t used it. When I started using Mobian (the Debian distribution for phones) zram was in the default setup; it basically works and I never needed to bother with it, which is exactly what you want from such a technology. After seeing its benefits in Mobian I started using it on my laptops, where it worked well.

Benefits of ZRAM

ZRAM means that instead of paging data to storage it is compressed into another part of RAM. That means no access to storage, which is a significant benefit if storage is slow (typical for phones) or if storage wearing out is a problem.

For servers you typically have SSDs that are fast and last for significant write volumes, for example the 120G SSDs referenced in my blog post about swap (not) breaking SSD [1] are running well in my parents’ PC because they outlasted all the other hardware connected to them and 120G isn’t usable for anything more demanding than my parents use nowadays. Those are Intel 120G 2.5″ DC grade SATA SSDs. For most servers ZRAM isn’t a good choice as you can just keep doing IO on the SSDs for years.

A server that runs multiple VMs is a special case because you want to isolate the VMs from each other. Support for storage IO quotas in Linux isn’t easy to configure, while limiting the number of CPU cores is very easy. If a system or VM using ZRAM for swap starts paging excessively the bottleneck will be CPU; this probably isn’t going to be great on a phone with a slow CPU, but on a server class CPU it will be less of a limit. Whether compression is slower or faster than SSD is a complex issue, but it will definitely be a limit only for that VM. When I set up a VM server I want to have some confidence that a DoS attack or configuration error on one VM isn’t going to destroy the performance of other VMs. If the VM server has 4 cores (the smallest VM server I run) and no VM has more than 2 cores then I know that the system can still run adequately even if half the CPU performance is being wasted.

Some servers I run have storage limits that make saving the disk space for swap useful. For servers I run at Hetzner (currently only one, but I have run up to 6 at various times in the past) the storage is often limited; Hetzner seems to typically offer storage around 8 times the size of RAM, so if you have many VMs configured with the swap they might need, in the expectation that usually at most one of them will actually be swapping, then it can make a real difference to usable storage. 5% of storage used for swap files isn’t uncommon or unreasonable.
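
For what it’s worth, enabling zram swap on a Debian VM is only a few commands; this is a rough sketch assuming the zram-tools package and its /etc/default/zramswap configuration file:

apt install zram-tools
# e.g. set ALGO=zstd and PERCENT=50 in /etc/default/zramswap, then:
systemctl restart zramswap
swapon --show    # the zram swap device should now be listed with its priority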

Big Servers

I am still considering the implications of zram on larger systems. If I have a ML server with 512G of RAM would it make sense to use it? It seems plausible that a system might need 550G of RAM and zram could make the difference between jobs being killed with OOM and the jobs just completing. The CPU overhead of compression shouldn’t be an issue as when you have dozens of cores in the system having one or two used for compression is no big deal. If a system is doing strictly ML work there will be a lot of data that can’t be compressed, so the question is how much of the memory is raw input data and the weights used for calculations and how much is arrays with zeros and other things that are easy to compress.

With a big server nothing less than 32G of swap will make much difference to the way things work, and if you have 32G of data being actively paged then the fastest NVMe devices probably won’t be enough to give usable performance. As zram uses one “stream” per CPU core, if you have 44 cores that means 44 compression streams, which should handle greater throughput. I’ll write another blog post if I get a chance to test this.
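
To see how well the data actually compresses (and how many streams are in use), zramctl from util-linux is a quick way to check; the exact columns may vary between versions:

zramctl
# typically shows ALGORITHM, DISKSIZE, DATA, COMPR (compressed size) and STREAMS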

27 August, 2025 05:19AM by etbe

Matthew Palmer

StrongBox: Simple, Safe Data Encryption for Rust

Some time ago, I wanted to encrypt a bunch of data in an application I was writing in Rust, mostly to be stored in a database, but also session cookies and sensitive configuration variables. Since Rust is widely known as a secure-yet-high-performance programming language, I was expecting that there would be a widely-used crate that gave me a secure, high-level interface to strong, safe cryptography. Imagine my surprise when I discovered that just… didn’t seem to exist.

Don’t get me wrong: Rust is replete with fast, secure, battle-tested cryptographic primitives. The RustCrypto group provides all manner of robust, widely-used crates for all manner of cryptography-related purposes. They’re the essential building blocks for practical cryptosystems, but using them directly in an application is somewhat akin to building a car from individual atoms of iron and carbon.

So I wrote my own high-level data encryption library, called it StrongBox, and have been happily encrypting and decrypting data ever since.

Cryptography So Simple Even I Can’t Get It Wrong

The core of StrongBox is the StrongBox trait, which has only two methods: encrypt and decrypt, each of which takes just two arguments. The first argument is the plaintext (for encrypt) or the ciphertext (for decrypt) to work on. The second argument is the encryption context, used as Additional Authenticated Data, an important part of many uses of encryption.

There’s essentially no configuration or parameters to get wrong. You can’t choose the encryption algorithm, or block cipher mode, and you don’t have to worry about generating a secure nonce. You create a StrongBox with a key, and then you call encrypt and decrypt. That’s it.

Practical Cryptographic Affordances

Ok, ok… that’s not quite it. Because StrongBox is even easier to use than what I’ve described, thanks to the companion crate, StructBox.

When I started using StrongBox "in the wild", it quickly became clear that what I almost always wanted to encrypt in my application wasn't some ethereal "plaintext". I wanted to encrypt things, specifically structs (and enums). So, through the magic of Rust derive macros, I built StructBox, which provides encrypt and decrypt operations on any Serde-able type. Given that Serde encoders can be a bit fiddly to use, it's arguably easier to get an encrypted, serialized struct than it is to get a plaintext serialized struct.

Key Problems in Cryptography

The thing about cryptography is that it largely turns all data security problems into key management problems. All the fancy cryptographic wonkery is for naught if you don’t manage the encryption keys well.

So, most of the fancy business in StrongBox isn’t the encryption and decryption, but instead solving problems around key management.

Different Keys for Different Purposes

Using the same key for all of your cryptographic needs is generally considered a really bad idea. It opens up all manner of risks that are easily avoided if you use different keys for different things. However, having to maintain a big pile of different keys is a nightmare, so nobody's going to do that.

Enter: key derivation. Create one safe, secure “root” key, and then use a key derivation function to spawn as many other keys as you need. Different keys for each database column, another one to encrypt cookies, and so on.

StrongBox supports this through the StemStrongBox type. You’ll typically start off by creating a StemStrongBox with the “root” key, and then derive whatever other StrongBoxes you need, for encrypting and decrypting different kinds of data.

You Spin Me Round…

Sometimes, keys need to be rotated. Whether that’s because you actually know (or even have reason to suspect) someone has gotten the key, or just because you’re being appropriately paranoid, sometimes key rotation has to happen.

As someone who has had to rotate keys in situations where such an eventuality was not planned for, I can say with some degree of authority: it absolutely sucks to have to do an emergency key rotation in a system that isn’t built to make that easy. That’s why StrongBox natively supports key rotation. Every StrongBox takes one encryption key, and an arbitrary number of decryption keys, and will automatically use the correct key to decrypt ciphertexts.

Will You Still Decrypt Me, Tomorrow?

In addition to “manual” key rotation, StrongBox also supports time-based key rotation with the RotatingStrongBox type. This comes in handy when you’re encrypting a lot of “ephemeral” data, like cookies (or server-side session data). It provides a way to automatically “expire” old data, and prevents attacks that become practical when large amounts of data are encrypted using a single key.

Invasion of the Invisible Salamanders!

I mostly mention this just because I love the name, but there is a kind of attack possible in common AEAD modes called the invisible salamanders attack. StrongBox implements mitigations against this, by committing to the key being used so that an attacker can’t forge a ciphertext that decrypts validly to different plaintexts when using different keys. This is why I love cryptography: everything sounds like absolute goddamn magic.

Call Me Crazy, Support Me Maybe?

If you’re coding in Rust (which you probably should be), encrypting your stored data (which you definitely should be), and StrongBox makes your life easier (which it really will), you can show your appreciation for my work by contributing to my open source code-fund. Simply by shouting me a refreshing beverage, you’ll be helping me out, and helping to grow the software commons. Alternately, if you’re looking for someone to Speak Rust to Computers on a professional basis, I’m available for contracts or full-time remote positions.

27 August, 2025 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

August 25, 2025

Gunnar Wolf

The comedy of computation, or, how I learned to stop worrying and love obsolescence

This post is an unpublished review for The comedy of computation, or, how I learned to stop worrying and love obsolescence

“The Comedy of Computation” is not an easy book to review. It is a most enjoyable book that analyzes several examples of how “being computational” has been approached across literary genres in the last century: how authors of stories, novels, theatrical plays and movies, focusing on comedic genres, have understood the role of the computer in defining human relations, reactions and even self-image.

Mangrum structures his work in six thematic chapters, each presenting a different angle on human society: how racial stereotypes have advanced in the human imagination and in perceptions of a future where we interact with mechanical or computational partners (from mechanical tools performing jobs that were identified with racial profiles to intelligent robots that threaten to control society); how computers (and people) can be seen as generic, interchangeable characters, often fueled by the tendency people exhibit to confer anthropomorphic qualities on inanimate objects; people’s desire to be seen as “truly authentic”, regardless of what that ultimately means; romantic involvement and romance-led stories (with the computer seen as a facilitator of human-to-human romance, a distraction from it, or itself a part of the couple); and the absurdity of anthropomorphization, of comparing fundamentally different aspects such as intelligence and speed at solving mathematical operations, as well as the absurdity presented blatantly as such by several techno-utopian visions.

But presenting this as a linear set of concepts does not do justice to the book. Throughout the sections of each chapter, a different work serves as the axis: novels and stories, Hollywood movies, Broadway plays, some covers of Time magazine, a couple of pieces presenting the would-be future, even a romantic comedy entirely written by “bots”. And for each of them, Benjamin Mangrum presents a very thorough analysis, drawing relations and comparing with contemporary works, but also with Shakespeare, classical Greek myths, and a very long etcætera. This book is hard to review because of the depth of work the author did: reading it repeatedly made me look for other works, or at least longer references for them.

Still, despite being a work with such erudition, Mangrum’s text is easy and pleasant to read, without feeling heavy or written in an overly academic style. I very much enjoyed reading this book. It is certainly not a technical book about computers and society in any way; it is an exploration of human creativity and our understanding of the aspects the author has found as central to understanding the impact of computing on humankind.

However, there is one point I must mention before closing: I believe the editorial decision to present the work as a running text, with all the material conceived as footnotes relegated to a separate final chapter more than 50 pages long, detracts from the final result. Personally, I enjoy reading footnotes because they reveal the author’s thought processes, even if they stray from the central line of thought. Even more so given that my review copy was a PDF: I could not even keep said chapter open with one finger, bouncing back and forth. For all practical purposes, I missed out on the notes; now that I have finished reading and stumbled upon that chapter, I know I missed an important part of the enjoyment.

25 August, 2025 04:35PM

Scarlett Gately Moore

A Bittersweet Farewell: My Final KDE Snap Release and the End of an Era

Today marks both a milestone and a turning point in my journey with open source software. I’m proud to announce the release of KDE Gear 25.08.0 as my final snap package release. You can find all the details about this exciting update at the official KDE announcement.

After much reflection and with a heavy heart, I’ve made the difficult decision to retire from most of my open source software work, including snap packaging. This wasn’t a choice I made lightly – it comes after months of rejections and silence in an industry I’ve loved and called home for over 20 years.

Passing the Torch

While I’m stepping back, I’m thrilled to share that the future of KDE snaps is in excellent hands. Carlos from the Neon team has been working tirelessly to set up snaps on the new infrastructure that KDE has made available. This means building snaps in KDE CI is now possible – a significant leap forward for the ecosystem. I’ll be helping Carlos get the pipelines properly configured to ensure a smooth transition.

Staying Connected (But Differently)

Though I’m stepping away from most development work, I won’t be disappearing entirely from the communities that have meant so much to me:

  • Kubuntu: I’ll remain available as a backup, though Rik is doing an absolutely fabulous job getting the latest and greatest KDE packages uploaded. The distribution is in capable hands.
  • Ubuntu Community Council: I’m continuing my involvement here because I’ve found myself genuinely enjoying the community side of things. There’s something deeply fulfilling about focusing on the human connections that make these projects possible.
  • Debian: I’ll likely be submitting for emeritus status, as I haven’t had the time to contribute meaningfully and want to be honest about my current capacity.

The Reality Behind the Decision

This transition isn’t just about career fatigue – it’s about financial reality. I’ve spent too many years working for free while struggling to pay my bills. The recent changes in the industry, particularly with AI transforming the web development landscape, have made things even more challenging. Getting traffic to websites now requires extensive social media work and marketing – all expected to be done without compensation.

My stint at webwork was good while it lasted, but the changing landscape has made it unsustainable. I’ve reached a point where I can’t continue doing free work when my family and I are struggling financially. It shouldn’t take breaking a limb to receive the donations needed to survive.

A Career That Meant Everything

These 20+ years in open source have been the defining chapter of my professional life. I’ve watched communities grow, technologies evolve, and witnessed firsthand the incredible things that happen when passionate people work together. The relationships I’ve built, the problems we’ve solved together, and the software we’ve created have been deeply meaningful.

But I also have to be honest about where I stand today: I cannot compete in the current job market. The industry has changed, and despite my experience and passion, the opportunities just aren’t there for someone in my situation.

Looking Forward

Making a career change after two decades is terrifying, but it’s also necessary. I need to find a path that can provide financial stability for my family while still allowing me to contribute meaningfully to the world.

If you’ve benefited from my work over the years and are in a position to help during this transition, I would be forever grateful for any support. Every contribution, no matter the size, helps ease this difficult period: https://gofund.me/a9c55d8f

Thank You

To everyone who has collaborated with me, tested my packages, filed bug reports, offered encouragement, or simply used the software I’ve helped maintain – thank you. You’ve made these 20+ years worthwhile, and you’ve been part of something bigger than any individual contribution.

The open source world will continue to thrive because it’s built on the collective passion of thousands of people like Carlos, Rik, and countless others who are carrying the torch forward. While my active development days are ending, the impact of this community will continue long into the future.

With sincere gratitude and fond farewells,

Scarlett Moore

25 August, 2025 03:42PM by sgmoore

August 22, 2025

Matthias Geiger

Enforcing darkmode for QT programs under a non-QT based environment

I use sway as window manager on my main machine. As I prefer dark mode, I looked for a way to enable dark mode everywhere. For GTK-based applications this is fairly straightforward: just install whatever theme you prefer, and apply it. However, QT-based applications on a non-QT based desktop will look …
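
One common approach (which may or may not be the one described in the full post) is to route Qt applications through qt5ct/qt6ct and pick a dark style there:

# e.g. in ~/.profile or your sway environment; use qt5ct for Qt5-only applications
export QT_QPA_PLATFORMTHEME=qt6ct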

22 August, 2025 10:00PM by Matthias Geiger

Daniel Lange

Polkitd (Policy Kit Daemon) in Trixie ... allowing remote users to suspend, reboot, power off the local system

As per the previous Polkit blog post, the PolicyKit framework has lost the ability to understand its own .pkla files, and policies need to be expressed in JavaScript .rules files now.

To re-enable allowing remote users (think ssh) to reboot, hibernate, suspend or power off the local system, create a 10-shutdown-reboot.rules file in /etc/polkit-1/rules.d/:

polkit.addRule(function(action, subject) {
    if ((action.id == "org.freedesktop.login1.reboot-multiple-sessions" ||
         action.id == "org.freedesktop.login1.reboot" ||
         action.id == "org.freedesktop.login1.suspend-multiple-sessions" ||
         action.id == "org.freedesktop.login1.suspend" ||
         action.id == "org.freedesktop.login1.hibernate-multiple-sessions" ||
         action.id == "org.freedesktop.login1.hibernate" ||
         action.id == "org.freedesktop.login1.power-off-multiple-sessions" ||
         action.id == "org.freedesktop.login1.power-off") &&
        (subject.isInGroup("sudo") || (subject.user == "root")))
    {
        return polkit.Result.YES;
    }
});

and run systemctl restart polkit.
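
To check which actions exist and what their default (implicit) authorizations are before a rule like the one above kicks in, pkaction can be used, for example:

pkaction --action-id org.freedesktop.login1.reboot --verbose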

22 August, 2025 05:30PM by Daniel Lange

Russell Coker

Dell T320 H310 RAID and IT Mode

The Problem

Just over 2 years ago my Dell T320 server had a motherboard failure [1]. I recently bought another T320 that had been gutted (no drives, PSUs, or RAM) and put the bits from my one in it.

I installed Debian and the resulting installation wouldn’t boot; I tried installing in both UEFI and BIOS modes with the same result. Then I realised that the disks I had installed were available even though I hadn’t gone through the RAID configuration (I usually make a separate RAID-0 for each disk to work best with BTRFS or ZFS). I tried changing the BIOS setting for SATA disks between “RAID” and “AHCI” modes, which didn’t change things, and realised that the BIOS setting in question probably applies to the SATA connector on the motherboard and that the RAID card was in “IT” mode, which means that each disk is seen separately.

If you are using ZFS or BTRFS you don’t want to use a RAID-1, RAID-5, or RAID-6 on the hardware RAID controller; if there are different versions of the data on disks in the stripe then you want the filesystem to be able to work out which one is correct. To use “IT” mode you have to flash a different, unsupported firmware on the RAID controller, and then you either have to go to some extra effort to make it bootable or have a different device to boot from.

The Root Causes

Dell has no reason to support unusual firmware on their RAID controllers. Installing different firmware on a device that is designed for high availability is going to have some probability of data loss and perhaps more importantly for Dell some probability of customers returning hardware during the support period and acting innocent about why it doesn’t work. Dell has a great financial incentive to make it difficult to install Dell firmware on LSI cards from other vendors which have equivalent hardware as they don’t want customers to get all the benefits of iDRAC integration etc without paying the Dell price premium.

All the other vendors have similar financial incentives so there is no official documentation or support on converting between different firmware images. Dell’s support for upgrading the Dell version is pretty good, but it aborts if it sees something different.

The Attempts

I tried following the instructions in this document to flash back to Dell firmware [2]. This document is about the H310 RAID card in my Dell T320, AKA a “LSI SAS 9211-8i”. The sas2flash.efi program didn’t seem to do anything; it returned immediately and didn’t give an error message.

This page gives a start on how to get inside the Dell firmware package, but doesn’t work [3]. It doesn’t cover the case where sasdupie aborts with an error because it detects the current version as “00.00.00.00”, not something that the upgrade program is prepared to upgrade from. But it’s a place to start looking for someone who wants to try harder at this.

This forum post has some interesting information, I gave up before trying it, but it may be useful for someone else [4].

The Solution

Dell tower servers have as a standard feature an internal USB port for a boot device. So I created a boot image on a spare USB stick and installed it there and it then loads the kernel and mounts the filesystem from a SATA hard drive. Once I got that working everything was fine. The Debian/Trixie installer would probably have allowed me to install an EFI device on the internal USB stick as part of the install if I had known what was going to happen.

The system is now fully working and ready to sell. Now I just need to find someone who wants “IT” mode on the RAID controller and hopefully is willing to pay extra for it.

Whatever I sell the system for it seems unlikely to cover the hours I spent working on this. But I learned some interesting things about RAID firmware and hopefully this blog post will be useful to other people, even if only to discourage them from trying to change firmware.

22 August, 2025 03:57PM by etbe

Reproducible Builds (diffoscope)

diffoscope 305 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 305. This version includes the following changes:

[ Chris Lamb ]
* Upload to unstable/sid after the release of trixie.

You can find out more by visiting the project homepage.

22 August, 2025 12:00AM

diffoscope 304 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 304. This version includes the following changes:

[ Chris Lamb ]
* Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2)
  time. (Closes: reproducible-builds/diffoscope#414)
* Fix test after the upload of systemd-ukify 258~rc3 (vs. 258~rc2).
* Move from a mono-utils dependency to versioned "mono-devel | mono-utils"
  dependency, taking care to maintain the [!riscv64] architecture
  restriction. (Closes: #1111742)
* Use sed -ne over awk -F= to to avoid mangling dependency lines containing
  equals signs (=), for example version restrictions.
* Use sed backreferences when generating debian/tests/control to avoid DRY
  violations.
* Update copyright years.

[ Martin Joerg ]
* Avoid a crash in the HTML presenter when page limit is None.

You can find out more by visiting the project homepage.

22 August, 2025 12:00AM

August 21, 2025

Matthew Palmer

Progress on my open source funding experiment

When I recently announced that I was starting an open source crowd-funding experiment, I wasn’t sure what would happen. Perhaps there’d be radio silence, or a huge out-pouring of interest from people who wanted to see more open source code in the world. What’s happened so far has been… interesting.

I chose to focus on action-validator because it’s got a number of open feature requests, and it solves a common problem that people have. The thing is, I’ve developed and released a lot of open source over the multiple decades I’ve been noodling around with computers. Much of that has been of use to many people, the overwhelming majority of whom I will never, ever meet, hear from, or even know that I’ve helped them out.

One person, however, I do know about – a generous soul named Andy, who (as far as I know) doesn’t use action-validator, but who does use another tool I wrote some years ago: lvmsync. It’s somewhat niche, essentially “rsync for LVM-backed block devices”, so I’m slightly surprised that it’s my most-starred repository, at nearly 400(!) stars. Andy is one of the people who finds it useful, and he was kind enough to reach out and offer a contribution in thanks for lvmsync existing.

In the spirit of my open source code-fund, I applied Andy’s contribution to the “general” pool, and as a result have just released action-validator v0.8.0, which supports a new --rootdir command-line option, fixing action-validator issue #54. Everyone who uses --rootdir in their action-validator runs has Andy to thank, and I thank him too.

This is, of course, still early days in my experiment. You can be like Andy, and make the open source world a better place, by contributing to my code-fund, and you can get your name up in lights, too. Whether you’re an action-validator user, have gotten utility from any of the other things I’ve written, or just want to see more open source code in the world, your contribution is greatly appreciated.

21 August, 2025 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

August 20, 2025

Dirk Eddelbuettel

x13binary 1.1.61.1 on CRAN: Micro Fix

The x13binary team is happy to share the availability of Release 1.1.61.1 of the x13binary package providing the X-13ARIMA-SEATS program by the US Census Bureau which arrived on CRAN earlier today.

This release responds to a recent change in gfortran version 15 which now picks up a missing comma in a Fortran format string for printing output. The change is literally a one-char addition which we also reported upstream. At the same time this release also updates one README.md URL to an archive.org URL of an apparently deleted reference. There is now also an updated upstream release 1.1-62 which we should package next.

Courtesy of my CRANberries, there is also a diffstat report for this release showing changes to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

20 August, 2025 09:51PM

Antoine Beaupré

Encrypting a Debian install with UKI

I originally setup a machine without any full disk encryption, then somehow regretted it quickly after. My original reasoning was that this was a "play" machine so I wanted as few restrictions on accessing the machine as possible, which meant removing passwords, mostly.

I actually ended up having a user password, but disabled the lock screen. Then I started using the device to manage my photo collection, and suddenly there was a lot of "confidential" information on the device that I didn't want to store in clear text anymore.

Pre-requisites

So, how does one convert an existing install from plain text to full disk encryption? One way is to backup to an external drive, re-partition everything and copy things back, but that's slow and boring. Besides, cryptsetup has a cryptsetup-reencrypt command, surely we can do this in place?

Having not set aside enough room for /boot, I briefly considered a "encrypted /boot" configuration and conversion (e.g. with this guide) but remembered grub's support for this is flaky, at best, so I figured I would try something else.

Here, I'm going to guide you through how I first converted from grub to systemd-boot and a UKI kernel, and then re-encrypted my main partition.

Note that secureboot is disabled here, see further discussion below.

systemd-boot and Unified Kernel Image conversion

systemd folks have been developing UKI ("unified kernel image") as a way to ship kernels. The way this works is that the kernel, initrd and UEFI boot stub are bundled into a single portable executable that lives in the EFI partition, as opposed to /boot. This neatly solves my problem, because I already have such a clear-text partition and won't need to re-partition my disk to convert.

Debian has started some preliminary support for this. It's not default, but I found this guide from Vasudeva Kamath which was pretty complete. Since the guide assumes some previous configuration, I had to adapt it to my case.

Here's how I did the conversion to both systemd-boot and UKI, all at once. I could have perhaps done it one at a time, but doing both at once works fine.

Before your start, make sure secureboot is disabled, see the discussion below.

  1. install systemd tools:

    apt install systemd-ukify systemd-boot
    
  2. Configure systemd-ukify, in /etc/kernel/install.conf:

    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    

    TODO: it doesn't look like this generates an initrd with dracut, do we care?

  3. Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:

    [UKI]
    Cmdline=@/etc/kernel/cmdline
    

    The /etc/kernel/cmdline file doesn't actually exist here, and that's fine. Defaults are okay, as the image gets generated from your current /proc/cmdline. Check your /etc/default/grub and /proc/cmdline if you are unsure. You'll see the generated arguments in bootctl list below.

  4. Build the image:

    dpkg-reconfigure linux-image-$(uname -r)
    
  5. Check the boot options:

    bootctl list
    

    Look for a Type #2 (.efi) entry for the kernel.

  6. Reboot:

    reboot
    

You can tell you have booted with systemd-boot because (a) you won't see grub and (b) the /proc/cmdline will reflect the configuration listed in bootctl list. In my case, a systemd.machine_id variable is set there, and not in grub (compare with /boot/grub/grub.cfg).
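
If in doubt, bootctl can also report which boot loader was used for the current boot:

bootctl status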

By default, the systemd-boot loader just boots, without a menu. You can force the menu to show up by un-commenting the timeout line in /boot/efi/loader/loader.conf, by hitting keys during boot (e.g. hitting "space" repeatedly), or by calling:

systemctl reboot --boot-loader-menu=0

See the systemd-boot(7) manual for details on that.

I did not go through the secureboot process, presumably I had already disabled secureboot. This is trickier: because one needs a "special key" to sign the UKI image, one would need the collaboration of debian.org to get this working out of the box with the keys shipped onboard most computers.

In other words, if you want to make this work with secureboot enabled on your computer, you'll need to figure out how to sign the generated images before rebooting here, because otherwise you will break your computer. Otherwise, follow the following guides:

Re-encrypting root filesystem

Now that we have a way to boot an encrypted filesystem, we can switch to LUKS for our filesystem. Note that you can probably follow this guide if, somehow, you managed to make grub work with your LUKS setup, although as this guide shows, you'd need to downgrade the cryptographic algorithms, which seems like a bad tradeoff.

We're using cryptsetup-reencrypt for this which, amazingly, supports re-encrypting devices on the fly. The trick is it needs free space at the end of the partition for the LUKS header (which, I guess, makes it a footer), so we need to resize the filesystem to leave room for that, which is the trickiest bit.

This is a possibly destructive behavior. Be sure your backups are up to date, or be ready to lose all data on the device.

We assume 512 byte sectors here. Check your sector size with fdisk -l and adjust accordingly.

  1. Before you perform the procedure, make sure requirements are installed:

    apt install cryptsetup systemd-cryptsetup cryptsetup-initramfs
    

    Note that this requires network access, of course.

  2. Reboot into a live image. I like GRML, but any Debian live image will work, possibly including the installer.

  3. First, calculate how many sectors to free up for the LUKS header

    qalc> 32Mibyte / ( 512 byte )
    
      (32 mebibytes) / (512 bytes) = 65536
    
  4. Find the sector sizes of the Linux partitions:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    

    For example, here's an example with a /boot and / filesystem:

    $ sudo fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    /dev/nvme0n1p2 999424
    /dev/nvme0n1p3 3904979087
    
  5. Subtract the header size (from step 3) from the partition size (from step 4):

    qalc> set precision 100
    qalc> 3904979087 - 65536
    

    Or, last step and this one, in one line:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 - 65536 }'
    
  6. Recheck filesystem:

    e2fsck -f /dev/nvme0n1p2
    
  7. Resize filesystem:

    resize2fs /dev/nvme0n1p2 $(fdisk -l /dev/nvme0n1 | awk '/nvme0n1p2/ { print $4 - 65536 }')s
    

    Notice the trailing s here: it makes resize2fs interpret the number as 512-byte sectors, as opposed to the default unit (4k blocks).

  8. Re-encrypt filesystem:

    cryptsetup reencrypt --encrypt /dev/nvme0n1p2 --reduce-device-size=32M
    

    This is it! This is the most important step! Make sure your laptop is plugged in and try not to interrupt it. This can, apparently, be resumed without problem, but I'd hate to show you how.

    This will show progress information like:

    Progress:   2.4% ETA 23m45s,      53GiB written, speed   1.3 GiB/s
    

    Wait until the ETA has passed.

  9. Open and mount the encrypted filesystem and mount the EFI system partition (ESP):

    cryptsetup open /dev/nvme0n1p2 crypt
    mount /dev/mapper/crypt /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
    

    If this fails, now is the time to consider restoring from backups.

  10. Enter the chroot

    for fs in proc sys dev ; do
      mount --bind /$fs /mnt/$fs
    done
    chroot /mnt
    

    Pro tip: this can be done in one step in GRML with:

    grml-chroot /mnt bash
    
  11. Generate a crypttab:

    echo crypt_dev_nvme0n1p2 UUID=$(blkid -o value -s UUID /dev/nvme0n1p2) none luks,discard >> /etc/crypttab
    
  12. Adjust root filesystem in /etc/fstab, make sure you have a line like this:

    /dev/mapper/crypt_dev_nvme0n1p2 /               ext4    errors=remount-ro 0       1
    

    If you were already using a UUID entry for this, there's nothing to change!

  13. Configure the root filesystem in the initrd:

    echo root=/dev/mapper/crypt_dev_nvme0n1p2 > /etc/kernel/cmdline
    
  14. Regenerate UKI:

    dpkg-reconfigure linux-image-$(uname -r)
    

    Be careful here! systemd-boot inherits the command line from the system where it is generated, so this will possibly feature some unsupported arguments from your boot environment. In my case GRML had a couple of those, which broke the boot. It's still possible to work around this issue by tweaking the arguments at boot time, that said.

  15. Exit chroot and reboot

    exit
    reboot
    

Some of the ideas in this section were taken from this guide, but it was mostly rewritten to simplify the work. My guide also avoids grub hacks and ties to a specific initrd system (that guide uses initramfs-tools and grub, while I, above, switched to dracut and systemd-boot). RHEL also has a similar guide, perhaps even better.

Somehow I had built this system without LVM at all, which simplifies things a bit (as I don't need to also resize the physical volumes/volume groups), but if you have LVM, you need to tweak this procedure to also resize the LVM bits. The RHEL guide has some information about this.

20 August, 2025 07:45PM

Sven Hoexter

Istio: Connect via a VirtualService to External IP Addresses

Rant - I've a theory about istio: it feels like software designed by people who hate the IT industry and wanted revenge. So they wrote software with so many odd points of traffic interception (e.g. SNI based traffic re-routing) that it's completely impossible to debug. If you roll that out into an average company you completely halt IT operations for something like a year.

On topic: I've two endpoints (IP addresses serving HTTPS on a non-standard port) outside of kubernetes, and I need some rudimentary balancing of traffic. Since istio is already here one can leverage that, combining the resource kinds ServiceEntry, DestinationRule and VirtualService to publish a service name within the istio mesh. Since we do not have host names and DNS for those endpoint IP addresses we need to rely on istio itself to intercept the DNS traffic and deliver a virtual IP address to access the service. The sample given here leverages the exportTo configuration to make the service name only available in the same namespace. If you need broader access remove or adjust that. As usual in kubernetes you can resolve the name also as FQDN, e.g. acme-service.mynamespace.svc.cluster.local.

---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: acme-service
spec:
  hosts:
    - acme-service
  ports:
    - number: 12345
      name: acmeglue
      protocol: HTTPS
  resolution: STATIC
  location: MESH_EXTERNAL
  # limit the availability to the namespace this resource is applied to
  # if you need cross namespace access remove all the `exportTo`s in here
  exportTo:
    - "."
  # use `endpoints:` in this setup, `addresses:` did not work
  endpoints:
    # region1
    - address: 192.168.0.1
      ports:
        acmeglue: 12345
    # region2
    - address: 10.60.48.50
      ports:
        acmeglue: 12345
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: acme-service
spec:
  host: acme-service
  # limit the availability to the namespace this resource is applied to
  exportTo:
    - "."
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
    connectionPool:
      tcp:
        tcpKeepalive:
          # We have GCP service attachments involved with a 20m idle timeout
          # https://cloud.google.com/vpc/docs/about-vpc-hosted-services#nat-subnets-other
          time: 600s
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: acme-service
spec:
  hosts:
    - acme-service
  # limit the availability to the namespace this resource is applied to
  exportTo:
    - "."
  http:
  - route:
    - destination:
        host: acme-service
    retries:
      attempts: 2
      perTryTimeout: 2s
      retryOn: connect-failure,5xx
---
# Demo Deployment, istio configuration is the important part
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foobar
  labels:
    app: foobar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar
        # enable istio sidecar
        sidecar.istio.io/inject: "true"
      annotations:
        # Enable DNS capture and interception, IP resolved will be in 240.240/16
        # If you use network policies you've to allow egress to this range.
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Now we can exec into the deployed pod, do something like curl -vk https://acme-service:12345, and it will talk to one of the endpoints defined in the ServiceEntry via an IP address out of the 240.240/16 Class E network.
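
A minimal way to verify both the DNS interception and the routing from inside the pod could look like this (a sketch; it assumes the foobar Deployment above and that getent and curl are available in the container image):

# resolve the mesh-internal name (expect an address out of 240.240.0.0/16)
kubectl exec deploy/foobar -c nginx -- getent hosts acme-service
# hit one of the external endpoints through the mesh
kubectl exec deploy/foobar -c nginx -- curl -skv https://acme-service:12345 -o /dev/null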

Documentation
https://istio.io/latest/docs/reference/config/networking/virtual-service/
https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution
https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings-SimpleLB
https://istio.io/latest/docs/ops/configuration/traffic-management/dns-proxy/#sidecar-mode

20 August, 2025 03:56PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.6.3-1 on CRAN: Minor Upstream Bug Fixes

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1268 other packages on CRAN, downloaded 41 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 642 times according to Google Scholar.

Conrad made three minor bug fix releases since the 14.6.0 release last month. We need to pace releases at CRAN so we do not immediately upload there on each upstream release—and then CRAN also had the usual (and well-deserved) summer rest leading to a slight delay relative to the last upstream. The minor changes in the three releases are summarized below. All our releases are always available via the GitHub repo and hence also via r-universe, and still rigorously tested via our own reverse-dependency checks. We also note that the package once again passed with flying colours and no human intervention which remains impressive given the over 1200 reverse dependencies.

Changes in RcppArmadillo version 14.6.3-1 (2025-08-14)

  • Upgraded to Armadillo release 14.6.3 (Caffe Mocha)

    • Fix OpenMP related crashes in Cube::slice() on Arm64 CPUs

Changes in RcppArmadillo version 14.6.2-1 (2025-08-08) (GitHub Only)

  • Upgraded to Armadillo release 14.6.2 (Caffe Mocha)

    • Fix for corner-case speed regression in sum()

    • Better handling of OpenMP in omit_nan() and omit_nonfinite()

Changes in RcppArmadillo version 14.6.1-1 (2025-07-21) (GitHub Only)

  • Upgraded to Armadillo release 14.6.1 (Caffe Mocha)

    • Fix for speed regression in mean()

    • Fix for detection of compiler configuration

    • Use of pow optimization now optional

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

20 August, 2025 02:31PM

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Benchmarking 3D graphic cards and their drivers

I have in the past benchmarked network links and disks, so as to have a rough idea of the performance of the hardware I am confronted at $WORK. As I started to dabble into Linux gaming (on non-PC hardware !), I wanted to have some numbers from the graphic stack as well.

I am using the command glmark2 --size 1920x1080, which is testing the performance of an OpenGL implementation, hardware + drivers. OpenGL is the classic 3D API used by most open source gaming on Linux (Doom3 Engine, SuperTuxKart, 0AD, Cube 2 Engine).
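
If you only care about the final number, something like this (a small sketch) extracts just the score:

glmark2 --size 1920x1080 2>/dev/null | awk '/glmark2 Score/ { print $NF }'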

Vulkan is getting traction as a newer 3D API however the equivalent Vulkan vkmark benchmark was crashing using the NVIDIA semi-proprietary drivers. (vkmark --size 1920x1080 was throwing an ugly Error: Selected present mode Mailbox is not supported by the used Vulkan physical device. )

# apt install glmark2
$ lspci | grep -i vga # integrated GPU
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 615 (rev 02)
$ glmark2 --size 1920x1080
...
...
glmark2 Score: 2063
$ lspci | grep -i vga # integrated GPU
00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
glmark2 Score: 3095
$ lspci | grep -i vga # discrete GPU, using nouveau
0000:01:00.0 VGA compatible controller: NVIDIA Corporation AD107GL [RTX 2000 / 2000E Ada Generation] (rev a1)
glmark2 Score: 2463
$ lspci | grep -i vga # discrete GPU, using nvidia-open semi-proprietary driver
0000:01:00.0 VGA compatible controller: NVIDIA Corporation AD107GL [RTX 2000 / 2000E Ada Generation] (rev a1)
glmark2 score: 4960

Nouveau currently has some graphical glitches with Doom3, so I am using the nvidia-open driver for this hardware.

In my testing with Doom3 and SuperTuxKart, post-2015 integrated Intel hardware is more than enough to play in HD resolution.

20 August, 2025 08:52AM by Manu

Reproducible Builds

Reproducible Builds summit 2025 to take place in Vienna

We are extremely pleased to announce the upcoming Reproducible Builds summit, which will take place from October 28th—30th 2025 in the historic city of Vienna, Austria.

This year, we are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Hamburg (2023—2024), Venice (2022), Marrakesh (2019), Paris (2018), Berlin (2017), Berlin (2016) and Athens (2015).

If you’re excited about joining us this year, please make sure to read the event page which has more details about the event and location. As in previous years, we will be sending invitations to all those who attended our previous summit events or expressed interest to do so. However, even if you do not receive a personal invitation, please do email the organizers and we will find a way to accommodate you.

About the event

The Reproducible Builds Summit is a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

With your help, we will bring this (and several other areas) to life:


The main seminar room.

Schedule

Although the exact content of the meeting will be shaped by the participants, the main goals will include:

  • Update & exchange about the status of reproducible builds in various projects.
  • Improve collaboration both between and inside projects.
  • Expand the scope and reach of reproducible builds to more projects.
  • Work together and hack on solutions.
  • Establish space for more strategic and long-term thinking than is possible in virtual channels.
  • Brainstorm designs on tools enabling users to get the most benefits from reproducible builds.
  • Discuss how reproducible builds will be usable and meaningful to users and developers alike.

Logs and minutes will be published after the meeting.

Location & date

Registration instructions

Please reach out if you’d like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

We look forward to what we anticipate to be yet another extraordinary event!

20 August, 2025 12:00AM

August 19, 2025

Russell Coker

Colmi P80 SmartWatch First Look

I just bought a Colmi P80 SmartWatch from Aliexpress for $26.11 based on this blog post reviewing it [1]. The main things I was after in this was a larger higher resolution screen because my vision has apparently deteriorated during the time I’ve been wearing a Pinetime [2] and I now can’t read messages on it when not wearing my reading glasses.

The watch hardware is quite OK. It has a larger and higher resolution screen and looks good. The review said that GadgetBridge (the FOSS SmartWatch software in the F-Droid repository) connected when told that the watch was a P79 and in a recent release got support for sending notifications. In my tests with GadgetBridge it doesn’t set the time, can’t seem to send notifications, can’t read the battery level, and seems not to do anything other than just say “connected”. So I installed the proprietary app, as an aside it’s a neat feature to have the watch display a QR code for installing the app, maybe InfiniTime should have a similar QR code for getting GadgetBridge from the F-Droid repository.

The proprietary app is quite OK for the basic functionality and a less technical relative who is using one is happy. For my use the proprietary app is utterly broken. One of my main uses is to get notifications of Jabber messages from the Conversations app (that's in F-Droid). I have Conversations configured to always have a notification of how many accounts are connected, which prevents Android from killing it. With GadgetBridge that notification isn't reported but the actual message contents are (I don't know how/why that happens), but with the Colmi app I get repeated notification messages on the watch about the accounts being connected. Also the proprietary app has on/off settings for messages to go to the watch for a hard coded list of 16 common apps and an "Others" setting for the rest. GadgetBridge lists the applications that are actually installed so I can configure it not to notify me about Reddit, connecting to my car audio, and many other less common notifications. I prefer the GadgetBridge option to have an allow-list for apps that I want notifications from, but it also has a configuration option to use a deny list, so you could allow everything other than the apps that give lots of low value notifications. The proprietary app has a wide range of watch faces that it can send to the watch, which is a nice feature that would be good to have in InfiniTime and GadgetBridge.

The P80 doesn’t display a code on screen when it is paired via Bluetooth, so if you have multiple smart watches you are at risk of connecting to the wrong one, and there doesn’t seem to be anything stopping a hostile party from connecting to it. Note that hostile parties are not restricted to the normal maximum transmission power and can use a high gain antenna for reception, so they can connect from longer distances than normal Bluetooth devices.

Conclusion

The Colmi P80 hardware is quite decent, the only downside is that the vibration has an annoying “tinny” feel. Strangely it has a rotation sensor for a rotating button (similar to analogue watches) but doesn’t seem to have a use for it as the touch screen does everything.

The watch firmware is quite OK (not great but adequate) but lacking a password for pairing is a significant lack.

The Colmi Android app has some serious issues that make it unusable for what I do and the release version of GadgetBridge doesn’t work with it, so I have gone back to the PineTime for actual use.

The PineTime cost twice as much, has fewer features (no sensor for O2 level in blood), but seems more solidly constructed.

I plan to continue using the P80 with GadgetBridge and Debian based SmartWatch software to help develop the Debian Mobile project. I expect that at some future time GadgetBridge and the programs written for non-Android Linux distributions will support the P80 and I will transition to it. I am confident that it will work well for me at some future time and that I will get $26.11 of value from it. At this time I recommend that people who do the sort of things I do get one of each and that less technical people get a Colmi P80.

19 August, 2025 10:31AM by etbe

August 18, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Amiga redux

Matthew blogged about his Amiga CDTV project, a truly unique Amiga hack which also manages to be a novel Doom project (no mean feat: it's a crowded space)

This re-awakened my dormant wish to muck around with my childhood Amiga some more. When I last wrote about it (four years ago ☹) I'd upgraded the disk drive emulator with an OLED display and rotary encoder. I'd forgotten to mention I'd also sourced a modern trapdoor RAM expansion which adds 2MiB of RAM. The Amiga can only see 1.5MiB1 of it at the moment; I need to perform a mainboard modification to access the final 512kiB2, which means some soldering.

Amiga Test Kit (https://github.com/keirf/Amiga-Stuff) showing 2MiB RAM

What I had planned to do back then: replace the switch in the left button of the original mouse, which was misbehaving; perform the aforementioned mainboard mod; upgrade the floppy emulator wiring to a ribbon cable with plug-and-socket, for easier removal; fit an RTC chip to the RAM expansion board to get clock support in the OS.

However, much of that might be moot, because of two other mods I am considering:

PiStorm

I've re-considered the PiStorm accelerator mentioned in Matt's blog.

Four years ago, I'd passed over it, because it required you to run Linux on a Raspberry Pi, and then an m68k emulator as a user-space process under Linux. I didn't want to administer another Linux system, and I'm generally uncomfortable about using a regular Linux distribution on SD storage over the long term.

However in the intervening years Emu68, a bare-metal m68k emulator has risen to prominence. You boot the Pi straight into Emu68 without Linux in the middle. For some reason that's a lot more compelling to me.

The PiStorm enormously expands the RAM visible to the Amiga. There would be no point in doing the mainboard mod to add 512k (and I don't know how that would interact with the PiStorm). It also can provide virtual hard disk devices to the Amiga (backed by files on the SD card), meaning the floppy emulator would be superfluous.

Denise Mainboard

I've just learned about a truly incredible project: the Denise Mini-ITX Amiga mainboard. It fits into a Mini-ITX case (I have a suitable one spare already). Some assembly required. You move the chips from the original Amiga over to the Denise mainboard. It's compatible with the PiStorm (or vice-versa). It supports PC-style PS/2 keyboards (I have a Model M in the loft, thanks again Simon) and has a bunch of other modern conveniences: onboard RTC; mini-ITX power (I'll need something like a picoPSU too).

It wouldn't support my trapdoor RAM card but it takes a 72-pin DIMM which can supply 2MiB of Chip RAM, and the PiStorm can do the rest (they're compatible3).

No stock at the moment but if I could get my hands on this, I could build something that could permanently live on my desk.


  1. the Boobip board's 1.5MiB is "chip" RAM: accessible to the other chips on the mainboard, with access mediated by the AGNUS chip.
  2. the final 512kiB is "Fast" RAM: only accessible to the CPU, not mediated via Agnus.
  3. confirmation

18 August, 2025 05:52AM

hackergotchi for Otto Kekäläinen

Otto Kekäläinen

Best Practices for Submitting and Reviewing Merge Requests in Debian

Featured image of post Best Practices for Submitting and Reviewing Merge Requests in Debian

Historically the primary way to contribute to Debian has been to email the Debian bug tracker with a code patch. Now that 92% of all Debian source packages are hosted at salsa.debian.org — the GitLab instance of Debian — more and more developers are using Merge Requests, but not necessarily in the optimal way. In this post I share what I’ve found the best practice to be, presented in the natural workflow from forking to merging.

Why use Merge Requests?

Compared to sending patches back and forth in email, using a git forge to review code contributions brings several benefits:

  • Contributors can see the latest version of the code immediately when the maintainer pushes it to git, without having to wait for an upload to Debian archives.
  • Contributors can fork the development version and easily base their patches on the correct version and help test that the software continues to function correctly at that specific version.
  • Both maintainer and other contributors can easily see what was already submitted and avoid doing duplicate work.
  • It is easy for anyone to comment on a Merge Request and participate in the review.
  • Integrating CI testing is easy in Merge Requests by activating Salsa CI.
  • Tracking the state of a Merge Request is much easier than browsing Debian bug reports tagged ‘patch’, and the cycle of submit → review → re-submit → re-review is much easier to manage in the dedicated Merge Request view compared to participants setting up their own email plugins for code reviews.
  • Merge Requests can have extra metadata, such as ‘Approved’, and the metadata often updates automatically, such as a Merge Request being closed automatically when the Git commit ID from it is pushed to the target branch.

Keeping these benefits in mind will help ensure that the best practices make sense and are aligned with maximizing these benefits.

Finding the Debian packaging source repository and preparing to make a contribution

Before sinking any effort into a package, start by checking its overall status at the excellent Debian Package Tracker. This provides a clear overview of the package’s general health in Debian, when it was last uploaded and by whom, and if there is anything special affecting the package right now. This page also has quick links to the Debian bug tracker of the package, the build status overview and more. Most importantly, in the General section, the VCS row links to the version control repository the package advertises. Before opening that page, note the version most recently uploaded to Debian. This is relevant because nothing in Debian currently enforces that the package in version control is actually the same as the latest uploaded to Debian.

Packaging source code repository links at tracker.debian.org

Following the Browse link opens the Debian package source repository, which is usually a project page on Salsa. To contribute, start by clicking the Fork button, select your own personal namespace and, under Branches to include, pick Only the default branch to avoid including unnecessary temporary development branches.

View after pressing Fork

Once forking is complete, clone it with git-buildpackage. For this example repository, the exact command would be gbp clone --verbose git@salsa.debian.org:otto/glow.git.

Next, add the original repository as a new remote and pull from it to make sure you have all relevant branches. Using the same fork as an example, the commands would be:

git remote add go-team https://salsa.debian.org/go-team/packages/glow.git
gbp pull --verbose --track-missing go-team

The gbp pull command can be repeated whenever you want to make sure the main branches are in sync with the original repository. Finally, run gitk --all & to visually browse the Git history and note the various branches and their states in the two remotes. Note the style in comments and repository structure the project has and make sure your contributions follow the same conventions to maximize the chances of the maintainer accepting your contribution.

It may also be good to build the source package to establish a baseline of the current state and what kind of binaries and .deb packages it produces. If using Debcraft, one can simply run debcraft build in the Git repository.
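
If you are not using Debcraft, a plain git-buildpackage build gives a similar baseline (a sketch; it assumes the build dependencies are installed):

# build unsigned source and binary packages from the packaging branch
gbp buildpackage -us -uc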

Submitting a Merge Request for a Debian packaging improvement

Always start by making a development branch by running git checkout -b <branch name> to clearly separate your work from the main branch.

When making changes, remember to follow the conventions you already see in the package. It is also important to be aware of general guidelines on how to make good Git commits.

If you are not able to immediately finish coding, it may be useful to publish the Merge Request as a draft so that the maintainer and others can see that you started working on something and what general direction your change is heading in.

If you don’t finish the Merge Request in one sitting and return to it another day, you should remember to pull the Debian branch from the original Debian repository in case it has received new commits. This can be done easily with these commands (assuming the same remote and branch names as in the example above):

git fetch go-team
git rebase -i go-team/debian/latest

Frequent rebasing is a great habit to help keep the Git history linear, and restructuring and rewording your commits will make the Git history easier to follow and understand why the changes were made.

When pushing improved versions of your branch, use git push --force. While GitLab does allow squashing, I recommend against it. It is better that the submitter makes sure the final version is a neat and clean set of commits that the receiver can easily merge without having to do any rebasing or squashing themselves.

When ready, remove the draft status of the Merge Request and wait patiently for review. If the maintainer does not respond in several days, try sending an email to <source package name>@packages.debian.org, which is the official way to contact maintainers. You could also post a comment on the MR and tag the last few committers in the same repository so that a notification email is triggered. As a last resort, submit a bug report to the Debian bug tracker to announce that a Merge Request is pending review. This leaves a permanent record for posterity (or the Debian QA team) of your contribution. However, most of the time simply posting the Merge Request in Salsa is enough; excessive communication might be perceived as spammy, and someone needs to remember to check that the bug report is closed.

Respect the review feedback, respond quickly and avoid Merge Requests getting stale

Once you get feedback, try to respond as quickly as possible. When people participating have everything fresh in their minds, it is much easier for the submitter to rework it and for the reviewer to re-review. If the Merge Request becomes stale, it can be challenging to revive it. Also, if it looks like the MR is only waiting for re-review but nothing happens, re-read the previous feedback and make sure you actually address everything. After that, post a friendly comment where you explicitly say you have addressed all feedback and are only waiting for re-review.

Reviewing Merge Requests

This section about reviewing is not exclusive to Debian package maintainers — anyone can contribute to Debian by reviewing open Merge Requests. Typically, the larger an open source project gets, the more help is needed in reviewing and testing changes to avoid regressions, and all diligently done work is welcome. As the famous Linus quote goes, “given enough eyeballs, all bugs are shallow”.

On salsa.debian.org, you can browse open Merge Requests per project or for a whole group, just like on any GitLab instance.

Reviewing Merge Requests is, however, most fun when they are fresh and the submitter is active. Thus, the best strategy is to ensure you have subscribed to email notifications in the repositories you care about so you get an email for any new Merge Request (or Issue) immediately when posted.

Change notification settings from Global to Watch to get an email on new Merge Requests

When you see a new Merge Request, try to review it within a couple of days. If you cannot review in a reasonable time, posting a small note that you intend to review it later will feel better to the submitter compared to not getting any response.

Personally, I have a habit of assigning myself as a reviewer so that I can keep track of my whole review queue at https://salsa.debian.org/dashboard/merge_requests?reviewer_username=otto, and I recommend the same to others. Seeing the review assignment happen is also a good way to signal to the submitter that their submission was noted.

Reviewing commit-by-commit in the web interface

Reviewing using the web interface works well in general, but I find that the way GitLab designed it is not ideal. In my ideal review workflow, I first read the Git commit message to understand what the submitter tried to do and why; only then do I look at the code changes in the commit. In GitLab, to do this one must first open the Commits tab and then click on the last commit in the list, as it is sorted in reverse chronological order with the first commit at the bottom. Only after that do I see the commit message and contents. Getting to the next commit is easy by simply clicking Next.

Example review to demonstrate location of buttons and functionality

When adding the first comment, I choose Start review and for the following remarks Add to review. Finally, I click Finish review and Submit review, which will trigger one single email to the submitter with all my feedback. I try to avoid using the Add comment now option, as each such comment triggers a separate notification email to the submitter.

Reviewing and testing on your own computer locally

For the most thorough review, I pull the code to my laptop for local review with git pull <remote url> <branch name>. There is no need to run git remote add as pulling using a URL directly works too and saves from needing to clean up old remotes later.

Pulling the Merge Request contents locally allows me to build, run and inspect the code deeply and review the commits with full metadata in gitk or equivalent.
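
As a concrete sketch of pulling an MR branch without adding a remote (the fork URL matches the earlier example; the branch name fix-typo is made up):

git fetch https://salsa.debian.org/otto/glow.git fix-typo
git checkout -b review/fix-typo FETCH_HEAD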

Investing enough time in writing feedback, but not too much

See my other post for more in-depth advice on how to structure your code review feedback.

In Debian, I would emphasize patience, to allow the submitter time to rework their submission. Debian packaging is notoriously complex, and even experienced developers often need more feedback and time to get everything right. Avoid the temptation to rush the fix in yourself. In open source, Git credits are often the only salary the submitter gets. If you take the idea from the submission and implement it yourself, you rob the submitter of the opportunity to get feedback, try to improve and finally feel accomplished. Sure, it takes extra effort to give feedback, but the contributor is likely to feel ownership of their work and later return to further improve it.

If a submission looks hopelessly low quality and you feel that giving feedback is a waste of time, you can simply respond with something along the lines of: “Thanks for your contribution and interest in helping Debian. Unfortunately, looking at the commits, I see several shortcomings, and it is unlikely a normal review process is enough to help you finalize this. Please reach out to Debian Mentors to get a mentor who can give you more personalized feedback.”

There might also be contributors who just “dump the code”, ignore your feedback and never return to finalize their submission. If a contributor does not return to finalize their submission in 3-6 months, I will in my own projects simply finalize it myself and thank the contributor in the commit message (but not mark them as the author).

Despite best practices, you will occasionally still end up doing some things in vain, but that is how volunteer collaboration works. We all just need to accept that some communication will inevitably feel like wasted effort, but it should be viewed as a necessary investment in order to get the benefits from the times when the communication led to real and valuable collaboration. Please just do not treat all contributors as if they are unlikely to ever contribute again; otherwise, your behavior will cause them not to contribute again. If you want to grow a tree, you need to plant several seeds.

Approving and merging

Assuming review goes well and you are ready to approve, and if you are the only maintainer, you can proceed to merge right away. If there are multiple maintainers, or if you otherwise think that someone else might want to chime in before it is merged, use the “Approve” button to show that you approve the change but leave it unmerged.

The person who approved does not necessarily have to be the person who merges. The point of the Merge Request review is not separation of duties in committing and merging — the main purpose of a code review is to have a different set of eyeballs looking at the change before it is committed into the main development branch for all eternity. In some packages, the submitter might actually merge themselves once they see another developer has approved. In some rare Debian projects, there might even be separate people taking the roles of submitting, approving and merging, but most of the time these three roles are filled by two people either as submitter and approver+merger or submitter+merger and approver.

If you are not a maintainer at all and do not have permissions to click Approve, simply post a comment summarizing your review and that you approve it and support merging it. This can help the maintainers review and merge faster.

Making a Merge Request for a new upstream version import

Unlike many other Linux distributions, in Debian each source package has its own version control repository. The Debian sources consist of the upstream sources with an additional debian/ subdirectory that contains the actual Debian packaging. For the same reason, a typical Debian packaging Git repository has a debian/latest branch that has changes only in the debian/ subdirectory while the surrounding upstream files are the actual upstream files and have the actual upstream Git history. For details, see my post explaining Debian source packages in Git.

Because of this Git branch structure, importing a new upstream version will typically modify three branches: debian/latest, upstream/latest and pristine-tar. When doing a Merge Request for a new upstream import, only submit one Merge Request for one branch: which means merging your new changes to the debian/latest branch.

There is no need to submit the upstream/latest branch or the pristine-tar branch. Their contents are fixed and mechanically imported into Debian. There are no changes that the reviewer in Debian can request the submitter to do on these branches, so asking for feedback and comments on them is useless. All review, comments and re-reviews concern the content of the debian/latest branch only.

It is not even necessary to use the debian/latest branch for a new upstream version. Personally, I always execute the new version import (with gbp import-orig --verbose --uscan) and prepare and test everything on debian/latest, but when it is time to submit it for review, I run git checkout -b import/$(dpkg-parsechangelog -SVersion) to get a branch named e.g. import/1.0.1 and then push that for review.
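
Put together, submitting a new upstream version might look roughly like this (a sketch; origin is assumed to be your own fork):

gbp import-orig --verbose --uscan
# build and test, then put the result on a dedicated branch for review
git checkout -b import/$(dpkg-parsechangelog -SVersion)
git push -u origin HEAD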

Reviewing a Merge Request for a new upstream version import

Reviewing and testing a new upstream version import is a bit tricky currently, but possible. The key is to use gbp pull to automate fetching all branches from the submitter’s fork. Assume you are reviewing a submission targeting the Glow package repository and there is a Merge Request from user otto’s fork. As the maintainer, you would run the commands:

git remote add otto https://salsa.debian.org/otto/glow.git
gbp pull --verbose otto

If there was feedback in the first round and you later need to pull a new version for re-review, running gbp pull --force will not suffice, and this trick of manually fetching each branch and resetting them to the submitter’s version is needed:

for BRANCH in pristine-tar upstream debian/latest
do
git checkout $BRANCH
git reset --hard origin/$BRANCH
git pull --force https://salsa.debian.org/otto/glow.git $BRANCH
done

Once review is done, either click Approve and let the submitter push everything, or alternatively, push all the branches you pulled locally yourself. In GitLab and other forges, the Merge Request will automatically be marked as Merged once the commit ID that was the head of the Merge Request is pushed to the target branch.

Please allow enough time for everyone to participate

When working on Debian, keep in mind that it is a community of volunteers. It is common for people to do Debian stuff only on weekends, so you should patiently wait for at least a week so that enough workdays and weekend days have passed for the people you interact with to have had time to respond on their own Debian time.

Having to wait may feel annoying and disruptive, but try to look at the upside: you do not need to do extra work simply while waiting for others. In some cases, that waiting can be useful thanks to the “sleep on it” phenomenon: when you yourself look at your own submission some days later with fresh eyes, you might notice something you overlooked earlier and improve your code change even without other people’s feedback!

Contribute reviews!

The last but not least suggestion is to make a habit of contributing reviews to packages you do not maintain. As we already see in large open source projects, such as the Linux kernel, they have far more code submissions than they can handle. The bottleneck for progress and maintaining quality becomes the reviews themselves.

For Debian, as an organization and as a community, to be able to renew and grow new contributors, we need more of the senior contributors to shift focus from merely maintaining their packages and writing code to also intentionally interact with new contributors and guide them through the process of creating great open source software. Reviewing code is an effective way to both get tangible progress on individual development items and to transfer culture to a new generation of developers.

Why aren’t 100% of all Debian source packages hosted on Salsa?

As seen at trends.debian.net, more and more packages are using Salsa. Debian does not, however, have any policy about it. In fact, the Debian Policy Manual does not even mention the word “Salsa” anywhere. Adoption of Salsa has so far been purely organic, as in Debian each package maintainer has full freedom to choose whatever preferences they have regarding version control.

I hope the trend to use Salsa will continue and more shared workflows emerge so that collaboration gets easier. To drive the culture of using Merge Requests and more, I drafted the Debian proposal DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are active in Debian and you think DEP-18 is beneficial for Debian, please give a thumbs up at dep-team/deps!21.

18 August, 2025 12:00AM

August 17, 2025

hackergotchi for C.J. Collier

C.J. Collier

The Very Model of a Patriot Online

It appears that the fragile masculinity tech evangelists have identified Debian as a community with boundaries which exclude them from abusing its members and they’re so angry about it! In response to posts such as this, and inspired by Dr. Conway’s piece, I’ve composed a poem which, hopefully, correctly addresses the feelings of that crowd.


The Very Model of a Patriot Online

I am the very model of a modern patriot online,
My keyboard is my rifle and my noble cause is so divine.
I didn't learn my knowledge in a dusty college lecture hall,
But from the chans where bitter anonymity enthralls us all.
I spend a dozen hours every day upon my sacred quest,
To put the globo-homo narrative completely to the test.
My arguments are peer-reviewed by fellas in the comments section,
Which proves my every thesis is the model of complete perfection.
I’m steeped in righteous anger that the libs call 'white fragility,'
For mocking their new pronouns and their lack of masculinity.
I’m master of the epic troll, the comeback, and the searing snark,
A digital guerrilla who is fighting battles in the dark.

I know the secret symbols and the dog-whistles historical,
From Pepe the Frog to ‘Let’s Go Brandon,’ in order categorical;
In short, for fighting culture wars with rhetoric rhetorical,
I am the very model of a patriot polemical.

***

I stand for true expression, for the comics and the edgy clown,
Whose satire is too based for all the fragile folks in town.
They say my speech is 'violence' while my spirit they are trampling,
The way they try to silence me is really quite a startling sampling
Of 1984, which I've not read but thoroughly understand,
Is all about the tyranny that's gripping this once-blessed land.
My humor is a weapon, it’s a razor-bladed, sharp critique,
(Though sensitive elites will call my masterpiece a form of ‘hate speech’).
They cannot comprehend my need for freedom from all consequence,
They call it 'hate,' I call it 'jokes,' they just don't have a lick of sense.
So when they call me ‘bigot’ for the spicy memes I post pro bono,
I tell them their the ones who're cancelled, I'm the victim here, you know!

Then I can write a screed against the globalist cabal, you see,
And tell you every detail of their vile conspiracy.
In short, when I use logic that is flexible and personal,
I am the very model of a patriot controversial.

***

I'm very well acquainted with the scientific method, too,
It's watching lengthy YouTube vids until my face is turning blue.
I trust the heartfelt testimony of a tearful, blonde ex-nurse,
But what a paid fact-checker says has no effect and is perverse.
A PhD is proof that you've been brainwashed by the leftist mob,
While my own research on a meme is how I really do my job.
I know that masks will suffocate and vaccines are a devil's brew,
I learned it from a podcast host who used to sell brain-boosting goo.
He scorns the lamestream media, the CNNs and all the rest,
Whose biased reporting I've put fully to a rigorous test
By only reading headlines and confirming what I already knew,
Then posting my analysis for other patriots to view.

With every "study" that they cite from sources I can't stand to hear,
My own profound conclusions become ever more precisely clear.
In short, when I've debunked the experts with a confident "Says who?!",
I am the very model of a researcher who sees right through you.

***

But all these culture wars are just a sleight-of-hand, a clever feint,
To hide the stolen ballots and to cover up the moral taint
Of D.C. pizza parlors and of shipping crates from Wayfair, it’s true,
It's all connected in a plot against the likes of me and you!
I've analyzed the satellite photography and watermarks,
I understand the secret drops, the cryptic Qs, the coded sparks.
The “habbening” is coming, friends, just give it two more weeks or three,
When all the traitors face the trials for their wicked treachery.
They say that nothing happened and the dates have all gone past, you see,
But that's just disinformation from the globalist enemy!
Their moving goalposts constantly, a tactic that is plain to see,
To wear us down and make us doubt the coming, final victory!

My mind can see the patterns that a simple sheep could never find,
The hidden puppet-masters who are poisoning our heart and mind.
In short, when I link drag queens to the price of gas and child-trafficking,
I am the very model of a patriot whose brain is quickening!

***

My pickup truck's a testament to everything that I hold dear,
With vinyl decals saying things the liberals all hate and fear.
The Gadsden flag is waving next to one that's blue and starkly thin,
To show my deep respect for law, except the feds who're steeped in sin.
There's Punisher and Molon Labe, so that everybody knows
I'm not someone to trifle with when push to final shoving goes.
I've got my tactical assault gear sitting ready in the den,
Awaiting for the signal to restore our land with my fellow men.
I practice clearing rooms at home when my mom goes out to the store,
A modern Minuteman who's ready for a civil war.
The neighbors give me funny looks, I see them whisper and take note,
They'll see what's what when I'm the one who's guarding checkpoints by their throat.

I am a peaceful man, of course, but I am also pre-prepared,
To neutralize the threats of which the average citizen's unscared.
In short, when my whole identity's a brand of tactical accessory,
You'll say a better warrior has never graced a Cabela's registry.

***

They say I have to tolerate a man who thinks he is a dame,
While feminists and immigrants are putting out my vital flame!
There taking all the jobs from us and giving them to folks who kneel,
And "woke HR" says my best jokes are things I'm not allowed to feel!
An Alpha Male is what I am, a lion, though I'm in this cubicle,
My life's frustrations can be traced to policies Talmudical.
They lecture me on privilege, I, who have to pay my bills and rent!
While they give handouts to the lazy, worthless, and incompetent!
My grandad fought the Nazis! Now I have to press a key for ‘one’
To get a call-rep I can't understand beneath the blazing sun
Of global, corporate tyranny that's crushing out the very soul
Of men like me, who've lost their rightful, natural, and just control!

So yes, I am resentful! And I'm angry! And I'm right to be!
They've stolen all my heritage and my masculinity!
In short, when my own failures are somebody else's evil plot,
I am the very model of the truest patriot we've got!

***

There putting chips inside of you! Their spraying things up in the sky!
They want to make you EAT THE BUGS and watch your very spirit die!
The towers for the 5G are a mind-control delivery tool!
To keep you docile while the children suffer in a grooming school!
The WEF, and Gates, and Soros have a plan they call the 'Great Reset,'
You'll own no property and you'll be happy, or you'll be in debt
To social credit overlords who'll track your every single deed!
There sterilizing you with plastics that they've hidden in the feed!
The world is flat! The moon is fake! The dinosaurs were just a lie!
And every major tragedy's a hoax with actors paid to cry!
I'M NOT INSANE! I SEE THE TRUTH! MY EYES ARE OPEN! CAN'T YOU SEE?!
YOU'RE ALL ASLEEP! YOU'RE COWARDS! YOU'RE AFRAID OF BEING TRULY FREE!

My heart is beating faster now, my breath is short, my vision's blurred,
From all the shocking truth that's in each single, solitary word!
I've sacrificed my life and friends to bring this message to the light, so...
You'd better listen to me now with all your concentrated might, ho!

***

For my heroic struggle, though it's cosmic and it's biblical,
Is waged inside the comments of a post that's algorithm-ical.
And still for all my knowledge that's both tactical and practical,
My mom just wants the rent I owe and says I'm being dramatical.

17 August, 2025 09:21AM by C.J. Collier

Valhalla's Things

rrdtool and Trixie

Posted on August 17, 2025
Tags: madeof:bits

TL;DR: if you’re using rrdtool on a 32 bit architecture like armhf, make an XML dump of your RRD files just before upgrading to Debian Trixie.

I am an old person at heart, so the sensor data from my home monitoring system1 doesn’t go to one of those newfangled javascript-heavy data visualization platforms, but into good old RRD files, using rrdtool to generate various graphs.

This happens on the home server, which is an armhf single board computer2, hosting a few containers3.

So, yesterday I started upgrading one of the containers to Trixie, and luckily I started from the one with the RRD, because when I rebooted into the fresh system and checked the relevant service I found it stopped on ERROR: '<file>' is too small (should be <size> bytes).

Some searxing later, I’ve4 found this was caused by the 64-bit time_t transition, which changed the format of the files, and that (somewhat unexpectedly) there was no way to fix it on the machine itself.

What needed to be done instead was to export the data to an XML dump before the upgrade, and then import it back afterwards.
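
In practice this boils down to something like the following (a sketch; the path is made up, adjust it to wherever your RRD files live):

# on the old (bookworm, 32-bit time_t) system: dump every RRD to XML
for f in /var/lib/homemonitor/*.rrd; do rrdtool dump "$f" > "${f%.rrd}.xml"; done
# on the upgraded (trixie) system: recreate the RRDs from the dumps
for f in /var/lib/homemonitor/*.xml; do rrdtool restore --force-overwrite "$f" "${f%.xml}.rrd"; done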

Easy enough, right? If you know about it, which is why I’m blogging this, so that other people will know in advance :)

Anyway, luckily I still had the other containers on bookworm, so I copied the files over there, did the upgrade, and my home monitoring system is happily running as before.


  1. of course one has a self-built home monitoring system, right?↩︎

  2. an A20-OLinuXino-MICRO, if anybody wants to know.↩︎

  3. mostly for ease of migrating things between different hardware, rather than insulation, since everything comes from Debian packages anyway.↩︎

  4. and by I I really mean Diego, as I was still into denial / distractions mode.↩︎

17 August, 2025 12:00AM

August 16, 2025

hackergotchi for Bits from Debian

Bits from Debian

Debian turns 32!

32nd Debian Day by Daniel Lenharo

On August 16, 1993, Ian Murdock announced the Debian Project to the world. Three decades (and a bit) later, Debian is still going strong, built by a worldwide community of developers, contributors, and users who believe in a free, universal operating system.

Over the years, Debian has powered servers, desktops, tiny embedded devices, and huge supercomputers. We have gathered at DebConfs, squashed countless bugs, shared late-night hacking sessions, and helped keep millions of systems secure.

Debian Day is a great excuse to get together, whether it is a local meetup, an online event, a bug squashing party, a team sprint or just coffee with fellow Debianites. Check out the Debian Day wiki to see if there is a celebration near you or to add your own.

Here is to 32 years of collaboration, code, and community, and to all the amazing people who make Debian what it is.

Happy Debian Day!

16 August, 2025 09:15AM by Debian Publicity Team

Birger Schacht

Updates and additions in Debian 13 Trixie

Last week Debian 13 (Trixie) was released and there have been some updates and additions in the packages that I maintain, that I wanted to write about. I think they are not worth of being added to the release notes, but I still wanted to list some of the changes and some of the new packages.

sway

Sway, the tiling Wayland compositor was version 1.7 in Bookworm. It was updated to version 1.10 (and 1.11 is already in experimental and waiting for an upload to unstable). This new version of sway brings, among a lot of other features, updated support for touchpad gestures and support for the ext-session-lock-v1 protocol, which allows for more robust and secure screen locking. The configuration snippet that activates the default sway background is now shipped in the sway-backgrounds package instead of being part of the sway package itself.

The default menu application was changed from dmenu to wmenu. wmenu is a Wayland native alternative to dmenu which I packaged and it is now recommended by sway.

There are some small helper tools for sway that were updated: swaybg was bumped from 1.2.0 to 1.2.1, swaylock was bumped from 1.7.2 to 1.8.2.

The grimshot script, which is a script for making screenshots, was part of the sway’s contrib folder for a long time (but was shipped as a separate binary package). It was removed from sway and is now part of the sway-contrib project. There are some other useful utilities in this source package that I might package in the future.

slurp, which is used by grimshot to select a region, was updated from version 1.4 to version 1.5.

labwc

I uploaded the first labwc package two years ago and I’m happy it is now part of a stable Debian release. Labwc is also based on wlroots, like sway. It is a window-stacking compositor and is inspired by openbox. I used openbox for a long time back in the day before I moved to i3 and I’m very happy to see that there is a Wayland alternative.

foot

Foot is a minimalistic and fast Wayland terminal emulator. It is mostly keyboard driven. foot was updated from version 1.13.1 to 1.21.0. Probably the most important change for users upgrading is that:

  • Control+Shift+u is now bound to unicode-input instead of show-urls-launch, to follow the convention established in GTK and Qt
  • show-urls-launch is now bound to Control+Shift+o (see the snippet below if you prefer the old behaviour)
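
The old behaviour can be restored in foot's configuration, roughly like this (a sketch; see foot.ini(5) for the exact key names):

# ~/.config/foot/foot.ini
[key-bindings]
# put URL mode back on its old binding and disable the new unicode-input one
show-urls-launch=Control+Shift+u
unicode-input=none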

et cetera

The Wayland kiosk cage was updated from 0.1.4 to 0.2.0.

The waybar bar for wlroots compositors was updated from 0.9.17 to 0.12.0.

swayimg was updated from 1.10 to 3.8 and now brings support for custom key bindings, support for additional image types (PNM, EXR, DICOM, Farbfeld, sixel) and a gallery mode.

tofi, another dmenu replacement was updated from 0.8.1 to 0.9.1, wf-recorder a tool for screen recording in wlroots-based compositors, was updated from version 0.3 to version 0.5.0. wlogout was updated from version 1.1.1 to 1.2.2. The application launcher wofi was updated from 1.3 to 1.4.1. The lightweight status panel yambar was updated from version 1.9 to 1.11. kanshi, the tool for managing and automatically switching your output profiles, was updated from version 1.3.1 to version 1.5.1.

usbguard was updated from version 1.1.2 to 1.1.3.

added

  • fnott - a lightweight notification daemon for wlroots based compositors
  • fyi - a utility to send notifications to a notification daemon, similar to notify-send
  • pipectl - a tool to create and manage short-lived named pipes, this is a dependency of wl-present. wl-present is a script around wl-mirror which implements output mirroring for wlroots-based compositors
  • poweralertd - a small daemon that notifies you about the power status of your battery powered devices
  • wlopm - control power management of outputs
  • wlrctl - command line utility for miscellaneous wlroots Wayland extensions
  • wmenu - already mentioned, the new default launcher of sway
  • wshowkeys - shows keypresses in Wayland sessions, nice for debugging
  • libsfdo - libraries implementing some freedesktop.org specs, used by labwc

16 August, 2025 05:28AM

August 15, 2025

Steinar H. Gunderson

Abstract algebra structures made easy

Group theory, and abstract algebra in general, has many useful properties; you can take a bunch of really common systems and prove very useful statements that hold for all of them at once.

But sometimes in computer science, we just use the names, not really the theorems. If you're showing that something is a group and then proceed to use Fermat's little theorem (perhaps to efficiently compute inverses, when it's not at all obvious what they would be), then you really can't go without the theory. But for some cases, we just love to be succinct in our description of things, and for outsiders, it's just… not useful.

So here's Steinar's easy (and more importantly, highly non-scientific; no emails about inaccuracies, please :-) ) guide to the most common abstract algebra structures:

  • Set: Hopefully you already know what this is. A collection of things (for instance numbers).
  • Semigroup: A (binary) operation that isn't crazy.
  • Monoid: An operation, but you also have a no-op.
  • Group: An operation, but you also have the opposite operation.
  • Abelian group: An operation, but the order doesn't matter.
  • Ring: Two operations; the Abelian group got a friend for Christmas. The extra operation might be kind of weird (for instance, has no-ops but might not always have opposites).
  • Field: A ring with some extra flexibility, so you can do almost whatever you are used to doing with “normal” (real) numbers except perhaps order them.

So for instance, assuming that x and y are e.g. non-negative integers (the positive integers plus zero), then max(x,y) (the motivating example for this post) is a monoid. Why? Because it's a non-crazy binary operation (in particular, max(max(x,y),z) = max(x,max(y,z))), and you can use x=0 or y=0 as a no-op (max(anything, 0) = anything). But it's not a group, because once you've done max(x,y), there's nothing you can max() with to get the smallest value back.
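
For the shell-inclined, here is the same argument as a throwaway bash sketch (non-negative integers only; this is just restating the properties above):

# max as a tiny bash function, using bash's arithmetic ternary operator
max() { echo $(( $1 > $2 ? $1 : $2 )); }

max 7 3               # 7
max 0 5               # 5  -- 0 acts as the no-op (identity), so: monoid
max "$(max 1 4)" 9    # 9  -- same as max 1 "$(max 4 9)": associativity
# but there is no z such that max 7 z gives 0, so no inverses -> not a group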

There are many more, but these are the ones you get today.

15 August, 2025 06:31PM

Freexian Collaborators

Monthly report about Debian Long Term Support, July 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In July, 17 contributors were paid to work on Debian LTS; their reports are available below:

  • Adrian Bunk did 19.0h (out of 19.0h assigned).
  • Andrej Shadura did 5.0h (out of 0.0h assigned and 8.0h from previous period), thus carrying over 3.0h to the next month.
  • Bastien Roucariès did 18.5h (out of 18.75h assigned), thus carrying over 0.25h to the next month.
  • Ben Hutchings did 12.5h (out of 3.25h assigned and 15.5h from previous period), thus carrying over 6.25h to the next month.
  • Carlos Henrique Lima Melara did 10.0h (out of 10.0h assigned).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 18.75h (out of 17.25h assigned and 1.5h from previous period).
  • Emilio Pozuelo Monfort did 18.75h (out of 18.75h assigned).
  • Guilhem Moulin did 15.0h (out of 14.0h assigned and 1.0h from previous period).
  • Jochen Sprickerhof did 2.0h (out of 16.5h assigned and 2.25h from previous period), thus carrying over 16.75h to the next month.
  • Lee Garrett did 7.0h (out of 0.0h assigned and 23.25h from previous period), thus carrying over 16.25h to the next month.
  • Markus Koschany did 9.0h (out of 18.75h assigned), thus carrying over 9.75h to the next month.
  • Roberto C. Sánchez did 10.25h (out of 18.5h assigned and 2.75h from previous period), thus carrying over 11.0h to the next month.
  • Santiago Ruano Rincón did 7.25h (out of 12.75h assigned and 2.25h from previous period), thus carrying over 7.75h to the next month.
  • Sylvain Beucler did 18.75h (out of 18.75h assigned).
  • Thorsten Alteholz did 15.0h (out of 15.0h assigned).
  • Utkarsh Gupta did 15.0h (out of 1.0h assigned and 14.0h from previous period).

Evolution of the situation

In July, we released 24 DLAs.

  • Notable security updates:
    • angular.js, prepared by Bastien Roucariès, fixes multiple vulnerabilities including input sanitization and potential regular expression denial of service (ReDoS)
    • tomcat9, prepared by Markus Koschany, fixes an assortment of vulnerabilities
    • mediawiki, prepared by Guilhem Moulin, fixes several information disclosure and privilege escalation vulnerabilities
    • php7.4, prepared by Guilhem Moulin, fixes several server side request forgery and denial of service vulnerabilities

This month’s contributions from outside the regular team include an update to thunderbird, prepared by Christoph Goehre (the package maintainer).

LTS Team members also contributed updates of the following packages:

  • commons-beanutils (to stable and unstable), prepared by Adrian Bunk
  • djvulibre (to oldstable, stable, and unstable), prepared by Adrian Bunk
  • git (to stable), prepared by Adrian Bunk
  • redis (to oldstable), prepared by Chris Lamb
  • libxml2 (to oldstable), prepared by Guilhem Moulin
  • commons-vfs (to oldstable), prepared by Daniel Leidert

Additionally, LTS Team member Santiago Ruano Rincón proposed and implemented an improvement to the debian-security-support package. This package is available so that interested users can quickly determine if any installed packages are subject to limited security support or are excluded entirely from security support. However, there was not previously a way to identify explicitly supported packages, which has become necessary to note exceptions to broad exclusion policies (e.g., those which apply to substantial package groups, like modules belonging to the Go and Rust language ecosystems). Santiago’s work has enabled the notation of exceptions to these exclusions, thus ensuring that users of debian-security-support have accurate status information concerning installed packages.

DebCamp 25 Security Tracker Sprint

The previously announced security tracker sprint took place at DebCamp from 7-13 July. Participants included 8 members of the standing LTS Team, 2 active Debian Developers with an interest in LTS, 3 community members, and 1 member of the Debian Security Team (who provided guidance and reviews on proposed changes to the security tracker); participation was a mix of in person at the venue in Brest, France and remote. During the days of the sprint, the team tackled a wide range of bugs and improvements, mostly targeting the security tracker.

The sprint participants worked on the following items:

As can be seen from the above list, only a small number of changes were brought to completion during the sprint week itself. Given the very compressed timeframe involved, the broad scope of tasks which were under consideration, and the highly sensitive data managed by the security tracker, this is not entirely unexpected and in no way diminishes the great work done by the sprint participants. The LTS Team would especially like to thank Salvatore Bonaccorso of the Debian Security Team for making himself available throughout the sprint to answer questions, for providing guidance on the work, and for helping by reviewing and merging the MRs which could be merged during the sprint itself.

In the weeks following the sprint, the team will continue working towards completing the in-progress items.

Thanks to our sponsors

Sponsors that joined recently are in bold.

15 August, 2025 12:00AM by Roberto C. Sánchez

August 14, 2025

Jonathan McDowell

Local Voice Assistant Step 4: openWakeWord

People keep asking me when I’ll write the next instalment of my local voice assistant journey. I didn’t mean for it to be so long since the last one; things have been busier than I’d like. Anyway. Last time we’d built Tensorflow, so now it’s time to sort out openWakeWord. As a reminder we’re trying to put a local voice satellite on my living room Debian media machine.

The point of openWakeWord is to run on the machine the microphone is connected to, listening for the wake phrase (“Hey Jarvis” in my case), and only then calling back to the central server to do a speech to text operation. It’s wrapped up for Wyoming as wyoming-openwakeword.

Of course I’ve packaged it up - available at https://salsa.debian.org/noodles/wyoming-openwakeword. Trixie was only released yesterday, so I’m still running all of this on bookworm. That means you need python3-wyoming from Trixie - 1.6.0-1 will install fine without needing to be rebuilt - and the python3-tflite-runtime we built last time.

Like the other pieces I’m not sure about how this could land in Debian; it’s unclear to me that the pre-trained models provided would be accepted in main.

As usual I start it with a systemd unit file dropped in /etc/systemd/system/wyoming-openwakeword.service:

[Unit]
Description=Wyoming OpenWakeWord server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=/usr/bin/wyoming-openwakeword --uri tcp://[::1]:10400/ --preload-model 'hey_jarvis' --threshold 0.8

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target
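
With the unit file in place, enabling it is the usual systemd dance (nothing here is specific to wyoming-openwakeword):

# make systemd pick up the new unit, then enable and start it
sudo systemctl daemon-reload
sudo systemctl enable --now wyoming-openwakeword.service
# follow the logs while testing the wake word
sudo journalctl -fu wyoming-openwakeword.service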

I’m still playing with the threshold level. It defaults to 0.5, but the device lives under the TV and seems to get a bit confused by it sometimes. There’s some talk about using speex for noise suppression, but I haven’t explored that yet (it’s yet another Python module to bind to the C libraries I’d have to look at).

This is a short one; next post is actually building the local satellite on top to tie everything together.

14 August, 2025 07:07PM

August 13, 2025

Sven Hoexter

Automated Browsing with Gemini and Chrome via BrowserMCP and gemini-cli

Brief dump so I don't forget how that worked in August 2025. Requires npm, npx and nodejs.

  1. Install Chrome
  2. Add the BrowserMCP extension
  3. Install gemini-cli: npm install -g @google/gemini-cli
  4. Retrieve a Gemini API key via AI Studio
  5. Export the API key for gemini-cli: export GEMINI_API_KEY=2342
  6. Start the BrowserMCP extension (see the manual); an info box will appear indicating that it is active, with a cancel button.
  7. Add the mcp server to gemini-cli: gemini mcp add browsermcp npx @browsermcp/mcp@latest
  8. Start gemini-cli, let it use the mcp server and task it to open a website (collected into a single shell sketch below).
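
Collected into a single shell sketch (the API key value is a placeholder; the package and server names are exactly the ones from the steps above):

# install gemini-cli globally via npm
npm install -g @google/gemini-cli
# make the Gemini API key from AI Studio available to gemini-cli
export GEMINI_API_KEY=your-api-key-here
# register the BrowserMCP MCP server with gemini-cli
gemini mcp add browsermcp npx @browsermcp/mcp@latest
# start an interactive session; with the BrowserMCP extension active in
# Chrome, gemini can now be tasked to open and drive websites
gemini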

13 August, 2025 12:21PM

August 12, 2025

Sergio Cipriano

Running Docker (OCI) Images in Incus

Incus 6.15 was released with a lot of cool features; my favorite so far is the authentication support for OCI registries.

Here's an example:

$ incus remote add docker https://docker.io --protocol=oci
$ incus launch docker:debian:sid sid
$ incus shell sid
root@sid:~# apt update && apt upgrade -y
root@sid:~# cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux forky/sid"
NAME="Debian GNU/Linux"
VERSION_CODENAME=forky
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

This has been really useful for creating containers to test packages, much better than launching the official Debian stable Incus images and then manually changing the sources list.
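
As a rough sketch of that package-testing workflow (the .deb file name is only a placeholder):

# throwaway test cycle: launch, copy a freshly built package in, install, clean up
incus launch docker:debian:sid test-pkg
incus file push ./hello_1.0-1_amd64.deb test-pkg/root/
incus exec test-pkg -- apt update
incus exec test-pkg -- apt install -y /root/hello_1.0-1_amd64.deb
incus delete --force test-pkg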

12 August, 2025 02:18AM

Freexian Collaborators

Debian Contributions: DebConf 25, OpenSSH upgrades, Cross compilation collaboration and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-07

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25, by Stefano Rivera and Santiago Ruano Rincón

In July, DebConf 25 was held in Brest, France. Freexian was a gold sponsor and most of the Freexian team attended the event. Many fruitful discussions were had amongst our team and within the Debian community.

DebConf itself was organized by a local team in Brest that included Santiago (who now lives in Uruguay). Stefano was also deeply involved in the organization, as a DebConf committee member, a core video team member, and the lead developer for the conference website. Running the conference took an enormous amount of work, consuming all of Stefano and Santiago’s time for most of July.

Lucas Kanashiro was active in the DebConf content team, reviewing talks and scheduling them. There were many last-minute changes to make during the event.

Anupa Ann Joseph was part of the Debian publicity team doing live coverage of DebConf 25 and was part of the DebConf 25 content team reviewing the talks. She also assisted the local team to procure the lanyards.

Recorded sessions presented by Freexian collaborators, often alongside other friends in Debian, included:

OpenSSH upgrades, by Colin Watson

Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported “No new SSH connections possible during large part of upgrade to Debian Trixie”, which would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:

  • As part of hardening the OpenSSH server, OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it; after this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen (roughly) in two phases: first we unpack the new files onto disk, and then we run some configuration steps which usually include things like restarting services. Normally this is fine, because the old service keeps on working until it’s restarted. In this case, unpacking the new files onto disk immediately stopped new SSH connections from working: the old sshd received the connection and tried to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this. This wasn’t much of a problem when upgrading OpenSSH on its own or with a small number of other packages, but in release upgrades it left a large gap when you can’t SSH to the system any more, and if anything fails in that interval then you could be in trouble.

    After trying a couple of other approaches, Colin landed on the idea of having the openssh-server package divert /usr/sbin/sshd to /usr/sbin/sshd.session-split before the unpack step of an upgrade from before 9.8, then removing the diversion and moving the new file into place once it’s ready to restart the service. This reduces the period when new connections fail to a minimum.

  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor part of the version number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, so as soon as you unpacked the new OpenSSL library during an upgrade, sshd stopped working. This couldn’t be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL, and time was tight if we wanted this to be available before the release of Debian 13.

    Fortunately, there’s a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted Colin’s proposal to fix this there.

The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine.
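
In practice that safe ordering boils down to something like the following rough sketch (the release notes remain the authoritative procedure, and the exact set of APT source files varies between systems):

# first bring bookworm fully up to date, including bookworm-updates,
# so the relaxed OpenSSL version check in openssh-server is already in place
sudo apt update && sudo apt full-upgrade
# only then switch the APT sources from bookworm to trixie...
sudo sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
# (adjust any files under /etc/apt/sources.list.d/ the same way)
# ...and run the release upgrade
sudo apt update && sudo apt full-upgrade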

Cross compilation collaboration, by Helmut Grohne

Supporting cross building in Debian packages touches many areas of the archive, and quite a few of these matters are a shared responsibility between different teams. Hence, DebConf was an ideal opportunity to settle long-standing issues.

The cross building BoF sparked lively discussions, as a significant fraction of developers employ cross builds to get their work done. In the trixie release, about two thirds of the packages can satisfy their cross Build-Depends and about half of the packages actually can be cross built.

Miscellaneous contributions

  • Raphaël Hertzog updated tracker.debian.org to remove references to Debian 10 which was moved to archive.debian.org, and had many fruitful discussions related to Debusine during DebConf 25.
  • Carles Pina prepared some data, questions and information for the DebConf 25 l10n and i18n BoF.
  • Carles Pina demoed and discussed possible next steps for po-debconf-manager with different teams in DebConf 25. He also reviewed Catalan translations and sent them to the packages.
  • Carles Pina started investigating a django-compressor bug: reproduced the bug consistently and prepared a PR for django-compressor upstream (likely more details next month). Looked at packaging frictionless-py.
  • Stefano Rivera triaged Python CVEs against pypy3.
  • Stefano prepared an upload of a new upstream release of pypy3 to Debian experimental (due to the freeze).
  • Stefano uploaded python3.14 RC1 to Debian experimental.
  • Thorsten Alteholz uploaded a new upstream version of sane-airscan to experimental. He also started to work on a new upstream version of hplip.
  • Colin backported fixes for CVE-2025-50181 and CVE-2025-50182 in python-urllib3, and fixed several other release-critical or important bugs in Python team packages.
  • Lucas uploaded ruby3.4 to experimental as a starting point for the ruby-defaults transition that will happen after Trixie release.
  • Lucas coordinated with the Release team on fixing the remaining RC bugs involving ruby packages, and got them all fixed.
  • Lucas, as part of the Debian Ruby team, kicked off discussions to improve internal process/tooling.
  • Lucas, as part of the Debian Outreach team, engaged in multiple discussions around internship programs we run and also what else we could do to improve outreach in the Debian project.
  • Lucas joined the Local groups BoF during DebConf 25 and shared all the good experiences from the Brazilian community and committed to help to document everything to try to support other groups.
  • Helmut spent significant time with Samuel Thibault on improving architecture cross bootstrap for hurd-any, mostly reviewing Samuel’s patches. He proposed a patch for improving bash’s detection of its pipesize and a change to dpkg-shlibdeps to improve behavior for building cross toolchains.
  • Helmut reiterated the multiarch policy proposal with a lot of help from Nattie Mayer-Hutchings, Rhonda D’Vine and Stuart Prescott.
  • Helmut finished his work on the process based unschroot prototype that was the main feature of his talk (see above).
  • Helmut analyzed a multiarch-related glibc upgrade failure induced by a /usr-move mitigation of systemd and sent a patch and regression fix both of which reached trixie in time. Thanks to Aurelien Jarno and the release team for their timely cooperation.
  • Helmut resurrected an earlier discussion about changing the semantics of Architecture: all packages in a multiarch context in order to improve the long-standing interpreter problem. With help from Tollef Fog Heen better semantics were discovered and agreement was reached with Guillem Jover and Julian Andres Klode to consider this change. The idea is to record a concrete architecture for every Architecture: all package in the dpkg database and enable choosing it as non-native.
  • Helmut implemented type hints for piuparts.
  • Helmut reviewed and improved a patch set of Jochen Sprickerhof for debvm.
  • Anupa was involved in discussions with the Debian Women team during DebConf 25.
  • Anupa started working on the trixie release coverage and started coordinating release parties.
  • Emilio helped coordinate the release of Debian 13 trixie.

12 August, 2025 12:00AM by Anupa Ann Joseph

August 11, 2025

David Bremner

Hibernate on the pocket reform 11/n

Context

Update to latest rockchip-devel

  • rebase reform-patches on top of collabora/rockchip-devel
  • pci reset series has new conflicts
  • try dropping from rebase and re-applying
$ b4 am 20250715-pci-port-reset-v6-0-6f9cce94e7bb@oss.qualcomm.com
# follow hint from b4
$ git checkout -b v6_20250715_manivannan_sadhasivam_oss_qualcomm_com 19272b37aa4f83ca52bdf9c16d5d81bdd1354494
$ git am ./v6_20250715_manivannan_sadhasivam_pci_add_support_for_resetting_the_root_ports_in_a_platform_specifi.mbx
$ git rebase -i collabora/rockchip-devel
  • conflict in pcie-qcom.c: take new version
  • conflict in pcie-dw-rockchip.c resolved as in hibernate-pocket-8

  • rebase reform patches on top of pci reset, instead of vice versa.

  • rebuild as discussed in hibernate-pocket-8

$ cp /boot/config-6.16.0-rc7+ .config
$ make olddefconfig
# this generates a message about "reform2_lpc config not found!!"
# and "rockchip_vdec2 config not found!!"
# hopefully this is ok
$ yes '' | make localmodconfig
$ make KBUILD_IMAGE=arch/arm64/boot/Image bindeb-pkg -j$(nproc)
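
The resulting .deb packages land one directory up; installing the new kernel image on the device is then presumably the usual dpkg step (pick the right package; the exact version string will differ):

# install the image package produced by bindeb-pkg
$ sudo dpkg -i ../linux-image-*_arm64.deb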

previous episode

11 August, 2025 05:20PM

August 10, 2025

Jonathan Carter

Debian 13

Debian 13 has finally been released!

One of the biggest and under-hyped features is support for HTTP Boot. This allows you to simply specify a URL (to any d-i or live image ISO) in your computer’s firmware setup and boot it directly over the Internet, so on computers made in the last ~5 years there is no need to download an image, write it to a flash disk and then boot from the flash disk. This is also supported by the Tianocore free EFI firmware, which is useful if you’d like to try it out on QEMU/KVM.
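
For example, a rough way to try it under QEMU/KVM with the Tianocore (OVMF) firmware might look like this (package names and firmware path are the usual Debian ones; the ISO URL is only an example, and this assumes the firmware build includes HTTP Boot support):

# install QEMU and the free EFI firmware
sudo apt install qemu-system-x86 ovmf
# boot a VM with no disk; enter the firmware setup (Esc during boot) and add
# an HTTP Boot URL pointing at a d-i or live ISO, e.g. something under
# https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/
qemu-system-x86_64 -m 2G -enable-kvm \
  -bios /usr/share/ovmf/OVMF.fd \
  -netdev user,id=net0 -device virtio-net-pci,netdev=net0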

More details about Debian 13 available on the official press release.

The default theme for Debian 13 is Ceratopsian, designed by Elise Couper. I’ll be honest, I wasn’t 100% sure it was the best choice when it won the artwork vote, but it really grew on me over the last few months, and it looked great in combination with all kinds of other things during DebConf too, so it has certainly won me over.

And I particularly like the Plymouth theme. It’s very minimal, and it reminds me of the Toy Story Trixie character, it’s almost like it helps explain the theme:

Plymouth (start-up/shutdown) theme.

Trixie, the character from Toy Story that was chosen as the codename for Debian 13.

Debian Local Team ISO testing

Yesterday we got some locals together for ISO testing and we got a cake with the wallpaper printed on it, along with our local team logo which has been a work in progress for the last 3 years, so hopefully we’ll finalise it this year! (it will be ready when it’s ready). It came out a lot bluer than the original wallpaper, but still tasted great.

For many releases, I’ve been the only person from South Africa doing ISO smoke-testing, and this time was quite different, since everyone else in the photo below tested an image except for me. I basically just provided some support and helped out with getting salsa/wiki accounts and some troubleshooting. It went nice and fast, and it’s always a big relief when there are no showstoppers for the release.

My dog was really wishing hard that the cake would slip off.

Packaging-wise, I only have one big new package for Trixie, and that’s Cambalache, a rapid application design UI builder for GTK3/GTK4.

The version in trixie is 0.94.1-3 and version 1.0 was recently released, so I’ll get that updated in forky and backport it if possible.

I was originally considering using Cambalache for an installer UI, but ended up going with a web front-end instead. But that’s moving firmly towards forky territory, so more on that another time!

Thanks to everyone who was involved in this release, so far upgrades have been very smooth!

10 August, 2025 02:53PM by jonathan

C.J. Collier

Upgrading Proxmox 7 to 8

Some variant of the following[1] worked for me.

The first line starts a for loop that uses ssh to run a command on each node in my cluster. The argument -t attaches a controlling terminal to STDIN, STDERR and STDOUT of the session, since there will not be an intervening shell to do it for us. The argument to ssh is a sequence of bash commands: they update the sources.list entries of the system to point at bookworm sources instead of bullseye, refresh the package cache, install the proxmox-ve package, and finally upgrade the remaining installed packages to the versions from bookworm, at which point the upgrade concludes.

Dear reader, you might be surprised how many times I saw the word “perl” scroll by during the manual, serial scrolling of this install. It took hours. There were a few prompts, so stand by the keyboard!

[1]

# expected output when the Proxmox release key is imported further down:
# gpg: key 1140AF8F639E0C39: public key "Proxmox Bookworm Release Key " imported
# have your ssh agent keychain running and a key loaded that's installed at 
# ~root/.ssh/authorized_keys on each node 
apt-get install -y keychain
eval $(keychain --eval)
ssh-add ~/.ssh/id_rsa
# Replace the IP address prefix (100.64.79.) and  suffixes (64, 121-128)
# with the actual IPs of your cluster nodes.  Or use hostnames :-)
for o in 64 121 122 123 124 125 126 127 128 ; do   ssh -t root@100.64.79.$o '
  sed -i -e s/bullseye/bookworm/g /etc/apt/sources.list $(compgen -G "/etc/apt/sources.list.d/*.list") \
  && echo "deb [signed-by=/usr/share/keyrings/proxmox-release.gpg] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    | dd of=/etc/apt/sources.list.d/proxmox-release.list status=none \
  && echo "deb [signed-by=/usr/share/keyrings/proxmox-release.gpg] http://download.proxmox.com/debian/ceph-quincy bookworm main no-subscription" \
    | dd of=/etc/apt/sources.list.d/ceph.list status=none \
  && proxmox_keyid="0xf4e136c67cdce41ae6de6fc81140af8f639e0c39" \
  && curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=${proxmox_keyid}" \
    | gpg --dearmor -o /usr/share/keyrings/proxmox-release.gpg  \
  && apt-get -y -qq update \
  && apt-get -y -qq install proxmox-ve \
  && apt-get -y -qq full-upgrade \
  && echo "$(hostname) upgraded"'; done
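
Afterwards, a quick sanity check that every node really ended up on Proxmox 8 (pveversion is the standard Proxmox reporting tool; reuse the same node list as above):

for o in 64 121 122 123 124 125 126 127 128 ; do
  ssh root@100.64.79.$o 'pveversion'   # should now report pve-manager/8.x
done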

10 August, 2025 06:48AM by C.J. Collier

August 09, 2025

Bits from Debian

Debian stable is now Debian 13 "trixie"!

We are pleased to announce the official release of Debian 13, codenamed trixie!

What's New in Debian 13

  • Official support for RISC-V (64-bit riscv64), a major architecture milestone
  • Enhanced security through ROP and COP/JOP hardening on both amd64 and arm64 (Intel CET and ARM PAC/BTI support)
  • HTTP Boot support in Debian Installer and Live images for UEFI/U-Boot systems
  • Upgraded software stack: GNOME 48, KDE Plasma 6, Linux kernel 6.12 LTS, GCC 14.2, Python 3.13, and more

Want to install it?

Fresh installation ISOs are now available, including the final Debian Installer featuring kernel 6.12.38 and mirror improvements. Choose your favourite installation media and read the installation manual. You can also use an official cloud image directly on your cloud provider, or try Debian prior to installing it using our "live" images.

Already a happy Debian user and you only want to upgrade?

Full upgrade path from Debian 12 "bookworm" is supported and documented in the Release Notes. Upgrade notes cover APT source preparation, handling obsoletes, and ensuring system resilience.

Additional Information

For full details, including upgrade instructions, known issues, and contributors, see the official Release Notes for Debian 13 "trixie".

Congratulations to all developers, QA testers, and volunteers who made Debian 13 "trixie" possible!

Do you want to celebrate the release?

To celebrate with us on this occasion, find a release party near you, and if there isn't one, organize one!

09 August, 2025 09:30PM by Anupa Ann Joseph

Thorsten Alteholz

My Debian Activities in July 2025

Debian LTS

This was my hundred-thirty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4255-1] audiofile security update of two CVEs related to an integer overflow and a memory leak.
  • [DLA 4256-1] libetpan security update to fix one CVE related to a null pointer dereference.
  • [DLA 4257-1] libcaca security update to fix two CVEs related to heap buffer overflows.
  • [DLA 4258-1] libfastjson security update to fix one CVE related to an out of bounds write.
  • [#1106867] kmail-account-wizard was marked as accepted

I also continued my work on suricata, which turned out to be more challenging than expected. This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-fourth ELTS month. Unfortunately my allocated hours were far less than expected, so I couldn’t do as much work as planned.

Most of the time I spent with FD tasks and I also attended the monthly LTS/ELTS meeting. I further listened to the debusine talks during debconf. On the one hand I would like to use debusine to prepare uploads for embargoed ELTS issues, on the other hand I would like to use debusine to run the version of lintian that is used in the different releases. At the moment some manual steps are involved here and I tried to automate things. Of course like for LTS, I also continued my work on suricata.

Debian Printing

This month I uploaded a new upstream version of:

Guess what, I also started to work on a new version of hplip and intend to upload it in August.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new upstream versions of:

  • supernovas (sponsored upload to experimental)
  • calceph (sponsored upload to experimental)

I also uploaded the new package boinor. This is a fork of poliastro, which was retired by upstream and removed from Debian some months ago. I adopted it and rebranded it at the request of upstream. boinor is the abbreviation of BOdies IN ORbit and I hope this software is still useful.

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

In my fight against outdated RFPs, I closed 31 of them in July. Their number is down to 3447 (how dare you open new RFPs? :-)). Don’t be afraid of them, they don’t bite and are happy to be released to a closed state.

FTP master

The peace will soon come to an end, so this month I accepted 87 and rejected 2 packages. The overall number of packages that got accepted was 100.

09 August, 2025 11:35AM by alteholz

Valhalla's Things

MOAR Pattern Weights

Posted on August 9, 2025
Tags: madeof:atoms

Six hexagonal blocks with a Standard Compliant sticker on top: mobian (blue variant), alizarin molecule, Use Jabber / Do Crime, #FreeSoftWear, indigotin molecule, The internet is ours with a cat that plays with yarn.

I’ve collected some more Standard Compliant stickers.

A picture of the lid of my laptop: a relatively old thinkpad carpeted with hexagonal stickers: Fediverse, a Debian swirl made of cat paw prints, #FreeSoftWear, 31 years of Debian, Open Source Hardware, XMPP, Ada Lovelace, rainbow holographic Fediverse, mobian (blue sticker), tails (cut from a round one), Use Jabber / Do Crime, LIFO, people consensually doing things together (center piece), GL-Como, Piecepack, indigotin, my phone runs debian btw, reproducible builds (cut from round), 4 freedoms in Italian (cut from round), Debian tea, alizarin, Software Heritage (cut from round), ournet.rocks (the cat also seen above), Python, this machine kills -9 daemons, 25 years of FOSDEM, Friendica, Flare. There are only 5 full hexagonal slots free.

Some went on my laptop, of course, but some were selected for another tool I use relatively often: more pattern weights like the ones I blogged about in February.

And of course the sources:

I have enough washers to make two more weights, and even more stickers, but the printer is currently not in use, so I guess they will happen a few months or so in the future.

09 August, 2025 12:00AM